Herman Cappelen investigates how language and other representational devices can go wrong, and how to fix them. We use language to understand and talk about the world, but what if our language has deficiencies that prevent it from playing that role? How can we revise our concepts, and what are the limits on revision?
The standard view of philosophical methodology is that philosophers rely on intuitions as evidence. Herman Cappelen argues that this claim is false: it is not true that philosophers rely extensively on intuitions as evidence. At worst, analytic philosophers are guilty of engaging in somewhat irresponsible use of 'intuition'-vocabulary. While this irresponsibility has had little effect on first-order philosophy, it has fundamentally misled meta-philosophers: it has encouraged meta-philosophical pseudo-problems and misleading pictures of what philosophy is.
_Insensitive Semantics_ is an overview of and contribution to the debates about how to accommodate context sensitivity within a theory of human communication, investigating the effects of context on communicative interaction and, as a corollary, what a context of utterance is and what it is to be in one. The book provides detailed and wide-ranging overviews of the central positions and arguments surrounding contextualism; addresses broad and varied aspects of the distinction between the semantic and non-semantic content of language; defends a distinctive and explanatorily powerful combination of semantic minimalism and speech act pluralism; and confronts core problems which not only run to the heart of philosophy of language and linguistics, but which arise in epistemology, metaphysics, and moral philosophy as well.
Cappelen and Hawthorne present a powerful critique of fashionable relativist accounts of truth, and the foundational ideas in semantics on which the new relativism draws. They argue compellingly that the contents of thought and talk are propositions that instantiate the fundamental monadic properties of truth and falsity.
Herman Philipse puts forward a powerful new critique of belief in God. He examines the strategies that have been used for the philosophical defence of religious belief, and by careful reasoning casts doubt on the legitimacy of relying on faith instead of evidence, and on probabilistic arguments for the existence of God.
The beginning of the twenty-first century saw something of a comeback for relativism within analytical philosophy. Relativism and Monadic Truth has three main goals. First, we wished to clarify what we take to be the key moving parts in the intellectual machinations of self-described relativists. Secondly, we aimed to expose fundamental flaws in those argumentative strategies that drive the pro-relativist movement and precursors from which they draw inspiration. Thirdly, we hoped that our polemic would serve as an indirect defence of a traditional and natural picture concerning truth. According to this picture, what we call ‘Simplicity’, the fundamental structure of semantic reality is best revealed by construing truth as a simple monadic property of propositions that serve as the objects of belief, assertion, meaning and agreement. Our project was not a straightforward one. So-called relativists are not uniform in their key ideology, are often sloppy, casual, obscure or confused in their self-characterization, and differ in their argumentative emphasis among themselves and over time, thereby presenting a target that is both amorphous and shifty. This is an area where parties will frequently claim not to understand each other and where certain parties will sometimes accuse others of failing to make any sense at all. In such a situation, any effort to impose order will inevitably strike some parties as tendentious and unfair. That said, we felt that we had enough of ….
I call the activity of assessing and developing improvements of our representational devices ‘conceptual engineering’.¹ The aim of this chapter is to present an argument for why conceptual engineering is important for all parts of philosophy (and, more generally, all inquiry). Section I of the chapter provides some background and defines key terms. Section II presents the argument. Section III responds to seven objections. The replies also serve to develop the argument and clarify what conceptual engineering is.
Conceptual engineering and conceptual ethics are branches of philosophy concerned with questions about how to assess and ameliorate our representational devices (such as concepts and words). They address questions about which concepts we should use (and why), how concepts can be improved, when concepts should be abandoned, and how proposals for amelioration can be implemented. Central parts of the history of philosophy have engaged with these issues, but the focus of this volume is on applications to work in contemporary philosophy of language and mind, epistemology, gender and race theory, ethics, philosophy of science, and philosophical logic. This is the first volume devoted entirely to conceptual engineering and conceptual ethics. The volume explores the possibilities, benefits, problems, and applications of conceptual engineering and conceptual ethics. It consists of twenty chapters written by leading philosophers.
God in the Age of Science? is a critical examination of strategies for the philosophical defence of religious belief. Herman Philipse argues that the most promising strategy for believers who want to be justified in accepting their creed in our scientific age is the Bayesian cumulative case strategy developed by Richard Swinburne, and goes on to present an in-depth analysis of this case for theism. Using a 'strategy of subsidiary arguments', Philipse concludes that theism cannot be stated meaningfully; that if theism were meaningful, it would have no predictive power concerning existing evidence, so that Bayesian arguments cannot get started; and that if the Bayesian cumulative case strategy did work, one should conclude that atheism is more probable than theism. Philipse provides a careful, rigorous, and original critique of theism in the world today.
Cappelen and Dever present a forceful challenge to the standard view that perspective, and in particular the perspective of the first person, is a philosophically deep aspect of the world. Their goal is not to show that we need to explain indexical and other perspectival phenomena in different ways, but to show that the entire topic is an illusion.
This volume unites various contributions reflecting the intellectual interests exhibited by Professor Herman Parret, who has continued to observe, and often critically assess, ongoing developments in pragmatics throughout his career. In fact, Parret's contributions to philosophical and empirical/linguistic pragmatics present substantive proposals in the epistemics of communication, while simultaneously offering meta-comments on the ideological premises of extant pragmatic analyses. In a lengthy introduction, an overview is provided of his achievements in promoting an integrated, "maximalist" pragmatics, as well as of the links between his own work in philosophy of language and in semiotics and aesthetics. The remaining 12 essays address relevant pragmatic themes or look into the relation between pragmatics and neighboring disciplines. They deal with grammatical deixis and mood, performativity, speech-act types and their praxeological dimensions, Wittgensteinian language games, cultural and intercultural identities, and the visual arts.
The view defended in this paper - I call it the No-Assertion view - rejects the assumption that it is theoretically useful to single out a subset of sayings as assertions: (v) Sayings are governed by variable norms, come with variable commitments and have variable causes and effects. What philosophers have tried to capture by the term 'assertion' is largely a philosophers' invention. It fails to pick out an act-type that we engage in and it is not a category we need in order to explain any significant component of our linguistic practice. Timothy Williamson (2000) defends a theory of type (i). He says that a theory of assertion has as its goal "[…] that of articulating for the first time the rules of a traditional game that we play" (p. 240). Among those who think we play the game of assertion, there's disagreement about what the rules are. Some think it's a single rule and disagree about what that rule is. Others think the rules change across contexts. According to the No-Assertion view we don’t play the assertion game. The game might exist as an abstract object, but it is not a game you need to learn and play to become a speaker of a natural language.
The purpose of this essay is to determine what exactly is meant by the claim that computer ethics is unique, a position that will henceforth be referred to as the CEIU thesis. A brief sketch of the CEIU debate is provided, and an empirical case involving a recent incident of cyberstalking is briefly considered in order to illustrate some controversial points of contention in that debate. To gain a clearer understanding of what exactly is asserted in the various claims about the uniqueness of computer ethics, and to avoid many of the confusions currently associated with the term 'unique', a precise definition of that term is proposed. We then differentiate two distinct and radically different interpretations of the CEIU thesis, based on arguments that can be found in the relevant computer ethics literature. The two interpretations are critically analyzed and both are shown to be inadequate in establishing the CEIU thesis. We then examine and reject two assumptions implicit in arguments advanced both by CEIU advocates and their opponents. In exposing and rejecting these assumptions, we see why it is not necessary to accept the conclusions reached by either side in this debate. Finally, we defend the view that computer ethics issues are both philosophically interesting and deserving of our attention, regardless of whether those issues might also happen to be unique ethical issues.
By focussing on the logical relations between scientific theories and religious beliefs in his book Where the Conflict Really Lies, Alvin Plantinga overlooks the real conflict between science and religion. This conflict exists whenever religious believers endorse positive factual claims to truth concerning the supernatural. They thereby violate an important rule of scientific method and of common sense, according to which factual claims should be endorsed as true only if they result from validated epistemic methods or sources.
This paper evaluates arguments presented by John Perry (and Ken Taylor) in favor of the presence of an unarticulated constituent in the proposition expressed by utterance of, for example, (1):1 1. It's raining (at t). We contend that these arguments are, at best, inconclusive. That's the critical part of our paper. On the positive side, we argue that (1) has as its semantic content the proposition that it is raining (at t) and that this is a location-neutral proposition. According to the view we propose, an audience typically looks for a location when they hear utterances of (1) because their interests in rain are location-focused: it is the location of rain that determines whether we get wet, carrots grow, and roads become slippery. These are, however, contingent facts about rain, wetness, people, carrots, and roads – they are not built into the semantics for the verb 'rain'.
Bad Language is the first textbook on an emerging area in the study of language: non-idealized language use, the linguistic behaviour of people who exploit language for malign purposes. This lively, accessible introduction offers theoretical frameworks for thinking about such topics as lies and bullshit, slurs and insults, coercion and silencing.
This is the first book devoted to the question of how language can be used to talk about language. Cappelen and Lepore examine the semantics, the pragmatics, and the syntax of linguistic devices that can be used in this way, and present a new account of our use of quotation in a variety of different contexts.
Language Turned on Itself examines what happens when language becomes self-reflexive; when language is used to talk about language. Those who think, talk, and write about language are habitual users of various metalinguistic devices, but reliance on these devices begins early: kids are told, 'That's called a "rabbit"'. It's not implausible that a primitive capacity for the meta-linguistic kicks in at the beginning stages of language acquisition. But no matter when or how frequently these devices are invoked, one thing is clear: they present theorists of language with a complex data pattern. Herman Cappelen and Ernest Lepore show that the study of these devices and patterns not only represents an interesting and neglected project in the philosophy of language, but also carries important consequences for other parts of philosophy. Part I is devoted to presenting data about various aspects of our metalinguistic practices. In Part II, the authors examine and reject the four leading metalinguistic theories, and offer a new account of our use of quotation in a variety of different contexts. But the primary goal of this book is not to promote one theory over another. Rather, it is to present a deeply puzzling set of problems and explain their significance.
One central purpose of Experimental Philosophy (hereafter, x-phi) is to criticize the alleged reliance on intuitions in contemporary philosophy. In my book Philosophy without Intuitions (hereafter, PWI), I argue that philosophers don’t rely on intuitions. If those arguments are good, experimental philosophy has been engaged in an attack on a strawman. The goal of this paper is to bolster the criticism of x-phi in the light of responses.
In the quest for identity and healing, what belongs to the humanities and what to clinical psychology? Ginette Paris uses cogent and passionate argument as well as stories from patients to teach us to accept that the human psyche seeks to destroy relationships and lives as well as to sustain them. This is very hard to accept, which is why, so often, the body has the painful and dispiriting job of showing us what our psyche refuses to see. In jargon-free language, the author describes her own story of taking a turn downwards and inwards in the search for a metaphorical personal 'death'. If this kind of mortality is not attended to, then more literal bodily ailments and actual death itself can result. Paris engages with one of the main dilemmas of contemporary psychology and psychotherapy: how to integrate findings and insights from neuroscience and medicine into an approach to healing founded upon activation of the imagination. At present, she demonstrates, what is happening is damaging to both science and imagination.
There are at least four varieties of quotation, including pure, direct, indirect and mixed. A theory of quotation, we argue, should give a unified account of these varieties of quotation. Mixed quotation, as in 'Alice said that life is 'difficult to understand'', in which an utterance is directly and indirectly quoted concurrently, is an often overlooked variety of quotation. We show that the leading theories of pure, direct, and indirect quotation are unable to account for mixed quotation and therefore unable to provide a unified theory. In the second half of the paper we develop a unified theory of quotation based on Davidson's demonstrative theory. 'Language is the instrument it is because the same expression, with semantic features (meaning) unchanged, can serve countless purposes.' (Davidson 1968).
This paper examines the question whether, and to what extent, John Locke’s classic theory of property can be applied to the current debate involving intellectual property rights (IPRs) and the information commons. Organized into four main sections, Section 1 includes a brief exposition of Locke’s arguments for the just appropriation of physical objects and tangible property. In Section 2, I consider some challenges involved in extending Locke’s labor theory of property to the debate about IPRs and digital information. In Section 3, it is argued that even if the labor analogy breaks down, we should not necessarily infer that Locke’s theory has no relevance for the contemporary debate involving IPRs and the information commons. Alternatively, I argue that much of what Locke has to say about the kinds of considerations that ought to be accorded to the physical commons when appropriating objects from it – especially his proviso requiring that “enough and as good” be left for others – can also be applied to appropriations involving the information commons. Based on my reading of Locke’s proviso, I further argue that Locke would presume in favor of the information commons when competing interests (involving the rights of individual appropriators and the preservation of the commons) are at stake. In this sense, I believe that Locke offers us an adjudicative principle for evaluating the claims advanced by rival interests in the contemporary debate about IPRs and the information commons. In Section 4, I apply Locke’s proviso in my analysis of two recent copyright laws: the Copyright Term Extension Act (CTEA), and the Digital Millennium Copyright Act (DMCA). I then argue that both laws violate the spirit of Locke’s proviso because they unfairly restrict the access that ordinary individuals have previously had to resources that comprise the information commons.
Noting that Locke would not altogether reject copyright protection for IPRs, I conclude that Locke’s classic property theory provides a useful mechanism for adjudicating between claims about how best to ensure that individuals will be able to continue to access information in digitized form, while at the same time also allowing for that information to enjoy some form of legal protection.
A semantic theory T for a language L should assign content to utterances of sentences of L. One common assumption is that T will assign p to some S of L just in case in uttering S a speaker A says that p. We will argue that this assumption is mistaken.
This is the most comprehensive book ever published on philosophical methodology. A team of thirty-eight of the world's leading philosophers present original essays on various aspects of how philosophy should be and is done. The first part is devoted to broad traditions and approaches to philosophical methodology. The entries in the second part address topics in philosophical methodology, such as intuitions, conceptual analysis, and transcendental arguments. The third part of the book is devoted to essays about the interconnections between philosophy and neighbouring fields, including those of mathematics, psychology, literature and film, and neuroscience.
For some relativists some of the time the evidence for their view is a puzzling data pattern: On the one hand, there's evidence that the terms in question exhibit some kind of content stability across contexts. On the other hand, there's evidence that their contents vary from one context of use to another. The challenge is to reconcile these two sets of data. Truth relativists claim that their theory can do so better than contextualism and invariantism. Truth relativists, in effect, use an argument to the best explanation: they present data they claim to be able to handle better than any competing theory.
This review confirms Herman’s work as a praiseworthy contribution to East-West and comparative philosophical literature. Due credit is given to Herman for providing English readers with access to Buber’s commentary on, and personal translation of, the Chuang-Tzu; Herman’s insight into the later influence of I and Thou on Buber’s understanding of Chuang-Tzu and Taoism is also appropriately commended. In the latter half of this review, constructive criticisms of Herman’s work are put forward, such as formatting inconsistencies, a tendency toward verbosity and jargon, and a neglect of seemingly important hermeneutical issues. Such issues, seemingly substantive but neglected by Herman, are the influence of Buber’s prior familiarity with Hasidic teachings on his encounter with Chuang-Tzu, as well as the prevalence of Hasidic and Taoist thought in Buber’s conception of good and evil.
This paper addresses four issues: 1. What is nonsense? 2. Is nonsense possible? 3. Is nonsense actual? 4. Why do the answers to (1)–(3) matter, if at all? These are my answers: 1. A sentence (or an utterance of one) is nonsense if it fails to have or express content (more on ‘express’, ‘have’, and ‘content’ below). This is a version of a view that can be found in Carnap (1959), Ayer (1936), and, maybe, the early Wittgenstein (1922). The notion I propose abstracts away from their favored (but wrong) theories of what meaning is. It is a notion of nonsense that can be appealed to by all semantic frameworks and all theories of what content is, but structurally it is just like e.g. Carnap’s. Nonsense, as I construe it, is accompanied by illusions of thought (and I think that was part of Carnap’s conception as well). 2. Yes. In particular, I examine three arguments for the impossibility of illusion of thought (which on my construal accompanies linguistic nonsense) and they are all unsound. 3. Yes. There might be a lot of nonsense, both in ordinary and theoretical speech. In particular, it is likely that much of contemporary philosophy consists of nonsense. Empirical work is required to determine just how much. 4. The struggle to avoid nonsense (and achieve meaningfulness) is at least as important as the struggle for truth. The avoidance of nonsense is a precondition not just for having a truth value but also for more important properties such as saying something interesting or kind.
In Insensitive Semantics (2004), we argue for two theses – Semantic Minimalism and Speech Act Pluralism. In this paper, we outline our defense against two objections often raised against Semantic Minimalism. To get to that defense, we first need some stage setting. To that end, we begin with five stage setting sections. These lead to the first objection, viz., that it might follow from our view that comparative adjectives are context insensitive. We defend our view against that objection (not, as you might expect, by denying that implication, but by endorsing it). Having done so, we address a second objection, viz., that Semantic Minimalism makes it difficult to see what role semantic content plays in communicative exchanges. We respond and end with a reversal, i.e., we argue that even though the second objection fails against us, it works against those who raise the objection. In particular, we show that Recanati ends up with a notion of communicated content that fails various tests for psychological reality.
It is a fundamental feature of language that words refer to things. Much attention has been devoted to the nature of reference, both in philosophy and in linguistics. Puzzles of Reference is the first book to give a comprehensive, accessible survey of the fascinating work on this topic from the 1970s to the present day.

Written by two eminent philosophers of language, Puzzles of Reference offers an up-to-date introduction to reference in philosophy and linguistics, summarizing ideas such as Kripke's revolutionary theory and presenting the various challenges in a clear and accessible manner. As the text does not assume prior training in philosophy or linguistics, it is ideal for use as part of a philosophy of language course for philosophy students or for linguistics students.

Puzzles of Reference belongs to the series Contemporary Introductions to Philosophy of Language, in which each book provides an introduction to an important area of the philosophy of language, suitable for students at any level.
In her very interesting ‘First-personal modes of presentation and the problem of empathy’, L. A. Paul argues that the phenomenon of empathy gives us reason to care about the first person point of view: that as theorists we can only understand, and as humans only evince, empathy by appealing to that point of view. We are skeptics about the importance of the first person point of view, although not about empathy. The goal of this paper is to see if we can account for empathy without the ideology of the first person. We conclude that we can.
The linguistic turn provided philosophers with a range of reasons for engaging in careful investigation into the nature and structure of language. However, the linguistic turn is dead. The arguments for it have been abandoned. This raises the question: why should philosophers take an interest in the minutiae of natural language semantics? I’ll argue that there isn’t much of a reason - philosophy of language has lost its way. Then I provide a suggestion for how it can find its way again.