Michael Devitt ([2006a], [2006b]) argues that, insofar as linguists possess better theories about language than non-linguists, their linguistic intuitions are more reliable. Culbertson and Gross presented empirical evidence contrary to this claim. Devitt replies that, in part because we overemphasize the distinction between acceptability and grammaticality, we misunderstand linguists' claims, fall into inconsistency, and fail to see how our empirical results can be squared with his position. We reply in this note. Inter alia, we argue that Devitt's focus on grammaticality intuitions, rather than acceptability intuitions, distances his discussion from actual linguistic practice. We close by questioning a demand that drives his discussion—viz., that, for linguistic intuitions to supply evidence for linguistic theorizing, a better account of why they are evidence is required.
Princess Diana's death was a tragedy that provoked mourning across the globe; the death of a homeless person, more often than not, is met with apathy. How can we account for this uneven distribution of emotion? Can it simply be explained by the prevailing scientific understanding? Uncovering a rich tradition beginning with Aristotle, The Secret History of Emotion offers a counterpoint to the way we generally understand emotions today. Through a radical rereading of Aristotle, Seneca, Thomas Hobbes, Sarah Fielding, and Judith Butler, among others, Daniel M. Gross reveals a persistent intellectual current that considers emotions as psychosocial phenomena. In Gross's historical analysis of emotion, the rhetoric of Aristotle and Hobbes shows that our passions do not stem from some inherent, universal nature of men and women, but rather are conditioned by power relations and social hierarchies. He follows up with a consideration of how political passions are distributed to some people but not to others, using the Roman Stoics as a guide. Hume and contemporary theorists like Judith Butler, meanwhile, explain to us how psyches are shaped by power. To supplement his argument, Gross also provides a history and critique of the dominant modern view of emotions, expressed in Darwinism and neurobiology, in which they are considered organic, personal feelings independent of social circumstances. The result is a convincing work that rescues the study of the passions from science and returns it to the humanities and the art of rhetoric.
On his death in 2007, Richard Rorty was heralded by the New York Times as "one of the world's most influential contemporary thinkers." Controversial on the left and the right for his critiques of objectivity and political radicalism, Rorty experienced a renown denied to all but a handful of living philosophers. In this masterly biography, Neil Gross explores the path of Rorty's thought over the decades in order to trace the intellectual and professional journey that led him to that prominence. The child of a pair of leftist writers who worried that their precocious son "wasn't rebellious enough," Rorty enrolled at the University of Chicago at the age of fifteen. There he came under the tutelage of polymath Richard McKeon, whose catholic approach to philosophical systems would profoundly influence Rorty's own thought. Doctoral work at Yale led to Rorty's landing a job at Princeton, where his colleagues were primarily analytic philosophers. With a series of publications in the 1960s, Rorty quickly established himself as a strong thinker in that tradition—but by the late 1970s Rorty had eschewed the idea of objective truth altogether, urging philosophers to take a "relaxed attitude" toward the question of logical rigor. Drawing on the pragmatism of John Dewey, he argued that philosophers should instead open themselves up to multiple methods of thought and sources of knowledge—an approach that would culminate in the publication of Philosophy and the Mirror of Nature, one of the most seminal and controversial philosophical works of our time. In clear and compelling fashion, Gross sets that surprising shift in Rorty's thought in the context of his life and social experiences, revealing the many disparate influences that contribute to the making of knowledge. As much a book about the growth of ideas as it is a biography of a philosopher, Richard Rorty will provide readers with a fresh understanding of both the man and the course of twentieth-century thought.
Stewart Shapiro's book develops a contextualist approach to vagueness. It's chock-full of ideas and arguments, laid out in wonderfully limpid prose. Anyone working on vagueness (or the other topics it touches on—see below) will want to read it. According to Shapiro, vague terms have borderline cases: there are objects to which the term neither determinately applies nor determinately does not apply. A term determinately applies in a context iff the term's meaning and the non-linguistic facts determine that it does. The non-linguistic facts include the "external" context: "comparison class, paradigm cases, contrasting cases, etc." (33) But external-context sensitivity is not what's central to Shapiro's contextualism. Even fixing external context, vague terms' (anti-)extensions exhibit sensitivity to internal context: the decisions of competent speakers. According to Shapiro's open texture thesis, for each borderline case, there is some circumstance in which a speaker, consistently with the term's meaning and the non-linguistic facts, can judge it to fall into the term's extension and some circumstance in which the speaker can judge it to fall into the term's anti-extension: she can "go either way." Moreover, borderline sentences are Euthyphronically judgment-dependent: a competent speaker's judging a borderline to fall into a term's (anti-)extension makes it so. For Shapiro, then, a sentence can be true but indeterminate: a case left unsettled by meaning and the non-linguistic facts (and thus indeterminate, or borderline) may be made true by a competent speaker's judgment. Importantly, among the non-linguistic facts that constrain speakers' judgments (at least in the cases Shapiro cares about) is a principle of tolerance: for all x and y, if x and y differ marginally in the relevant respect (henceforth, Mxy), then if one competently judges Bx, one cannot competently judge y in any other manner in the same (total) context. This does not require that one judge By: one might not consider the matter at all.
Who are the best subjects for judgment tasks intended to test grammatical hypotheses? Michael Devitt ([2006a], [2006b]) argues, on the basis of a hypothesis concerning the psychology of such judgments, that linguists themselves are. We present empirical evidence suggesting that the relevant divide is not between linguists and non-linguists, but between subjects with and without minimally sufficient task-specific knowledge. In particular, we show that subjects with at least some minimal exposure to or knowledge of such tasks tend to perform consistently with one another—greater knowledge of linguistics makes no further difference—while at the same time exhibiting markedly greater in-group consistency than those who have no previous exposure to or knowledge of such tasks and their goals.
This paper motivates two bases for ascribing propositional semantic knowledge (or something knowledge-like): first, because it's necessary to rationalize linguistic action; and, second, because it's part of an empirical theory that would explain various aspects of linguistic behavior. The semantic knowledge ascribed on these two bases seems to differ in content, epistemic status, and cognitive role. This raises the question: how are they related, if at all? The bulk of the paper addresses this question. It distinguishes a variety of answers and their varying philosophical and empirical commitments.
Donald Davidson aims to illuminate the concept of meaning by asking: What knowledge would suffice to put one in a position to understand the speech of another, and what evidence sufficiently distant from the concepts to be illuminated could in principle ground such knowledge? Davidson answers: knowledge of an appropriate truth-theory for the speaker's language, grounded in what sentences the speaker holds true, or prefers true, in what circumstances. In support of this answer, he both outlines such a truth-theory for a substantial fragment of a natural language and sketches a procedure—radical interpretation—that, drawing on such evidence, could confirm such a theory. Bracketing refinements (e.g., those introduced to…
In this note, I clarify the point of my paper “The Nature of Semantics: On Jackendoff’s Arguments” (NS) in light of Ray Jackendoff’s comments in his “Linguistics in Cognitive Science: The State of the Art.” Along the way, I amplify my remarks on unification.
This essay challenges those strains of contemporary social theory that regard romantic/sexual intimacy as a premier site of detraditionalization in the late modern era. Striking changes have occurred in intimacy and family life over the last half-century, but the notion of detraditionalization as currently formulated does not capture them very well. With the goal of achieving a more refined understanding, the article proposes a distinction between "regulative" and "meaning-constitutive" traditions. The former involve threats of exclusion from various moral communities; the latter involve linguistic and cultural frameworks within which sense is made of the world. Focusing on the U.S. case and marshaling various kinds of empirical evidence, the article argues that while the regulative tradition of what it terms lifelong, internally stratified marriage has declined in strength in recent years, the image of the form of couplehood inscribed in this regulative tradition continues to function as a hegemonic ideal in many American intimate relationships. Intimacy in the United States also remains beholden to the tradition of romantic love. That these meaning-constitutive traditions continue to play a central role in structuring contemporary intimacy suggests that detraditionalization involves the relative decline only of certain regulative traditions, a point that calls into question some of the normative assessments that often accompany the detraditionalization thesis.
Jackendoff defends a mentalist approach to semantics that investigates conceptual structures in the mind/brain and their interfaces with other structures, including specifically linguistic structures responsible for syntactic and phonological competence. He contrasts this approach with one that seeks to characterize the intentional relations between expressions and objects in the world. The latter, he argues, cannot be reconciled with mentalism. He objects in particular that intentionality cannot be naturalized and that the relevant notion of object is suspect. I critically discuss these objections, arguing in part that Jackendoff's position rests on questionable philosophical assumptions.
When a debate seems intractable, with little agreement as to how one might proceed towards a resolution, it is understandable that philosophers should consider whether something might be amiss with the debate itself. Famously in the last century, philosophers of various stripes explored in various ways the possibility that at least certain philosophical debates are in some manner deficient in sense. Such moves are no longer so much in vogue. For one thing, the particular ways they have been made have themselves undergone much critical scrutiny, so that many philosophers now feel that there is, for example, a Quinean response to Carnap, a Gricean reply to Austin, and a diluting proliferation of Wittgenstein interpretations. Be that as it may, there do of…
There is nothing in [the six chapters that make up the body of Articulating Reasons] that will come as a surprise to anyone who has mastered [Making It Explicit]. … I had in mind audiences that had perhaps not so much as dipped into the big book but were curious about its themes and philosophical consequences. (35–36).
According to cognitivist truth-theoretic accounts of semantic competence, aspects of our linguistic behavior can be explained by ascribing to speakers cognition of truth-theories. It's generally assumed on this approach that, however much context sensitivity speakers' languages contain, the cognized truth-theories themselves can be adequately characterized context-insensitively—that is, without using in the metalanguage expressions whose semantic value can vary across occasions of utterance. In this paper, I explore some of the motivations for and problems and consequences of dropping this assumption.
There is a long tradition of drawing metaphysical conclusions from investigations into language. This paper concerns one contemporary variation on this theme: the alleged ontological significance of cognitivist truth-theoretic accounts of semantic competence. According to such accounts, human speakers' linguistic behavior is in part empirically explained by their cognizing a truth-theory. Such a theory consists of a finite number of axioms assigning semantic values to lexical items, a finite number of axioms assigning semantic values to complex expressions on the basis of their structure and the semantic values of their constituents, and a finite number of production schemata. The theory enables the derivation of truth-conditions for each sentence of the language: something of roughly the form 'S is true iff P'. The claim that speakers stand in a cognitive relation to such theories is advanced, not as a conceptual analysis of semantic competence or understanding, but rather as an empirical hypothesis about human speakers in particular, one part of a broader empirical account of our linguistic competence and cognition generally. It therefore must mesh with the rest of our theorizing in these areas and whatever relevant data from neighboring inquiries there may be. (For example, since it's hypothesized that 'S' in the schema above should be replaced by a certain sort of syntactic representation of the sentence, syntactic evidence can bear on the semantic theory and vice versa.) The precise nature of the cognitive relation a speaker is supposed to bear to a truth-theory is a matter of some dispute. I speak of "cognizing" (following…
The claims are grounded in a wealth of fascinating data, particularly on primate and young child communication and social cognition, much of it produced by Tomasello's own lab. But there is certainly no dearth of stimulating speculation. Tomasello's story is rich and complex. In what follows, I focus on aspects of the three hypotheses listed above, offering some commentary as I go.
Aesthetics is today widely seen as the philosophy of art and/or beauty, limited to artworks and their perception. In this paper, I will argue that today's aesthetics and the original programme developed by the German Enlightenment thinker Alexander Gottlieb Baumgarten in the first half of the eighteenth century have only the name in common. Baumgarten did not primarily develop his aesthetics as a philosophy of art. The making and understanding of artworks had served in his original programme only as an example for the application of his philosophy. What he really attempts to present is an alternative philosophy of knowledge that goes beyond the purely rationalist, empiricist, and sensualist approaches. In short, Baumgarten transcends the old opposition between rationalism and sensualism. His core theme is the improvement (perfectio) of human knowledge and cognition and the ways to reach this goal. The study of Baumgarten's foundational works on aesthetics should not be undertaken merely out of antiquarian interest. I will argue, instead, that Baumgarten's importance and contemporary relevance lie in this: that his Aesthetica may serve as a profound contribution to the philosophy of the cultural sciences and humanities. Revisiting Baumgarten's original idea of aesthetics will lead us to a more inclusive concept of that philosophical discipline.
Bayesians take "definite" or "single-case" probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with "indefinite" or "general" probabilities (sometimes called "statistical probabilities"). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case "prob" and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such "finite frequency theories" are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances…
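To make the objection concrete, here is a minimal worked illustration (my sketch, not drawn from the paper itself): a finite frequency theory identifies the indefinite probability of an A being a B with the relative frequency of Bs among actual As,

\[ \mathrm{prob}(Bx/Ax) \;=\; \mathrm{freq}(B/A) \;=\; \frac{\#\{x : Ax \wedge Bx\}}{\#\{x : Ax\}}. \]

On this identification, a fair coin flipped exactly once and then destroyed yields freq(heads/flip) of either 1 or 0, while the indefinite probability of a flip landing heads is 0.5; hence the divergence on which the objection trades.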
Drawing upon research in philosophical logic, linguistics and cognitive science, this study explores how our ability to use and understand language depends upon our capacity to keep track of complex features of the contexts in which we converse.
Social scientists have traditionally attempted to avoid extending strategies for acquiring experimental knowledge to the sphere of the social. Bruno Latour, however, has introduced a notion of the collective experiment, an experiment conducted by and with us all. In this short paper I seek to show, by way of elucidating this talk of collective experiments, that Latour's notion has long existed in the theory and practice of ecological design and restoration. Practitioners in ecological restoration projects find themselves in a situation of double contingency, since they neither know how nature will respond to their intervention nor can be certain of their interpretation of these responses. Experimental practice in society then becomes the proceduralization of this contingency.
Linguists often advert to what are sometimes called linguistic intuitions. These intuitions and the uses to which they are put give rise to a variety of philosophically interesting questions: What are linguistic intuitions – for example, what kind of attitude or mental state is involved? Why do they have evidential force and how might this force be underwritten by their causal etiology? What light might their causal etiology shed on questions of cognitive architecture – for example, as a case study of how consciously inaccessible subpersonal processes give rise to conscious states, or as a candidate example of cognitive penetrability? What methodological issues arise concerning how linguistic intuitions are gathered and interpreted – for example, might some subjects' intuitions be more reliable than others? And what bearing might all this have on philosophers' own appeals to intuitions? This paper surveys and critically discusses leading answers to these questions. In particular, we defend a 'mentalist' conception of linguistics and the role of linguistic intuitions therein.
Should a theory of meaning state what sentences mean, and can a Davidsonian theory of meaning in particular do so? Max Kölbel answers both questions affirmatively. I argue, however, that the phenomena of non-homophony, non-truth-conditional aspects of meaning, semantic mood, and context-sensitivity provide prima facie obstacles for extending Davidsonian truth-theories to yield meaning-stating theorems. Assessing some natural moves in reply requires a more fully developed conception of the task of such theories than Kölbel provides. A more developed conception is also required to defend his positive answer to the first question above. I argue that, however Kölbel might elaborate his position, it can't be by embracing the sort of cognitivist account of Davidsonian semantics to which he sometimes alludes.
Fiona Cowie's _What's Within_ consists of three parts. In the first, she examines the early modern rationalist-empiricist debate over nativism, isolating what she considers the two substantive "strands" (67) that truly separated them: whether there exist domain-specific learning mechanisms, and whether concept acquisition is amenable to naturalistic explanation. She then turns, in the book's succeeding parts, to where things stand today with these issues. The second part argues that Jerry Fodor's view of concepts is continuous with traditional nativism in that it precludes a naturalistic story of concept acquisition. Cowie objects, however, to Fodor's path to this conclusion and thus sees no reason to endorse it. The third part assesses Chomskyan nativism as a contemporary instance of positing domain-specific learning mechanisms. Though she is highly critical of how "poverty of the stimulus" arguments and the like have been used to lend credence to stronger conclusions, she holds that such arguments do indeed support the nativist's domain-specificity claim. Cowie's reconsideration of nativism thus limits itself to concepts and language (a few exceptions aside: there are two brief forays into face recognition and a mention of pathogen response). The terrain she does cover, however, is vast; and Cowie's illuminating discussions will stimulate anyone interested in the area. As I focus on a few large-scale qualms in what follows, let me mention in particular that much of what is of interest in Cowie's book is to be found in her detailed consideration of specific arguments.
Michael Tye responds to the problem of higher-order vagueness for his trivalent semantics by maintaining that truth-value predicates are "vaguely vague": it's indeterminate, on his view, whether they have borderline cases and therefore indeterminate whether every sentence is true, false, or indefinite. Rosanna Keefe objects (1) that Tye's argument for this claim tacitly assumes that every sentence is true, false, or indefinite, and (2) that the conclusion is in any case not viable. I argue – contra (1) – that Tye's argument needn't make that assumption. A version of her objection is in fact better directed against other arguments Tye advances, though Tye can absorb this criticism without abandoning his position's core. On the other hand, Keefe's second objection does hit the mark: embracing 'vaguely vague' truth-value predicates undermines Tye's ability to support validity claims needed to defend his position. To see this, however, we must develop Keefe's remarks further than she does.
In their important book, Causation in the Law, H. L. A. Hart and Tony Honoré argue that causation in the law is based on causation outside the law, that the causal principles the courts rely on to determine legal responsibility are based on distinctions exercised in ordinary causal judgments. A distinction that particularly concerns them is one that divides factors that are necessary or sine qua non for an effect into those that count as causes for purposes of legal responsibility and those that do not. Hart and Honoré claim that this distinction is often one of fact rather than of legal policy, and that the factual basis is to be found in the ordinary distinction we draw between causes and 'mere conditions'. If this claim is correct, we may hope to illuminate the legal distinction by articulating the principles behind the ordinary one. This is a challenging task since, as in the case of most cognitive skills, we are far better at making particular judgments than we are at stating the general principles that underlie them. Hart and Honoré devote the first part of their book to this difficult task. We have, then, two large projects. One is to articulate our ordinary notion of causation, especially the distinction between cause and mere condition. This is the project of constructing an 'ordinary model'. The other is to argue for what we may call the 'shared concept claim', the claim that the concept of legal cause is based on the ordinary notion of causation, that 'causal judgments, though the law may have to systematize them, are not specifically legal. They appeal to a notion which is part of everyday life' (1985, p. lv; all references to follow are from this edition). This essay will focus on Hart and Honoré's ordinary model, rather than on their shared concept claim. In my judgment, Hart and Honoré's case for some version of the shared concept claim is strong, so they are right to maintain that a better understanding of our ordinary notion of…
This article attempts to understand Émile Durkheim's 1913–14 lectures on pragmatism and sociology by situating them in the socio-intellectual context of the time. An analysis of books and journal articles from the period reveals that the ideas of the Anglo-American pragmatist philosophers Charles Peirce, William James, John Dewey, and F.C.S. Schiller were very popular in pre-World War I France. The French term le pragmatisme, however, was used to refer not only to the thought of these philosophers, but also to the work of French thinkers, such as Henri Bergson and the Catholic Modernists Maurice Blondel and Édouard Le Roy, who wrote extensively about human action. Pragmatism, because of its associations with Bergsonian spiritualism and the theology of the Modernists, came to have religious connotations for many French intellectuals. Durkheim had a similar understanding of pragmatism, and his critique of the pragmatists cannot be fully grasped unless these religious connotations are considered. The article concludes by discussing several implications of this interpretation for sociological theory.
It is generally assumed that we are justified in punishing criminals because they have committed a morally wrongful act. Determining when criminal liability should be imposed calls for a moral assessment of the conduct in question, with criminal liability tracking as closely as possible the contours of morality. Versions of this view are frequently argued for in philosophical accounts of crime and punishment, and seem to be presumed by lawyers and policy makers working in the criminal justice system. Challenging such assumptions, this book considers the dominant justifications of punishment and subjects them to a piercing moral critique. It argues that none overcome the objection that people who are convicted of a serious crime and sent to prison have their basic human rights violated. The institution of criminal punishment is shown to be a regrettable necessity not deserving of the moral enthusiasm it enjoys among many politicians and the popular press. From a moral point of view, punishment is entitled at best to grudging toleration. In the course of developing the argument, the book introduces the principal issues of criminal law theory with the aim of presenting a morally enlightened perspective on crimes and why we punish them. Enforcement of the law by police, prosecutors, and courts is a matter of concern for political morality, and the principal practices of the criminal justice system are subjected to moral scrutiny. The book offers an engaging, provocative introduction to thinking about the philosophy of crime and punishment, challenging students and other readers to think about whether we are justified in punishing wrongdoers.
Responsible citizens are expected to combine ethical judgement with judiciously exercised social activism to preserve the moral foundation of democratic society and prevent political injustice. But do they? Utilizing a research model integrating insights from rational choice theory and cognitive developmental psychology, this book carefully explores three exemplary cases of morally inspired activism: Jewish rescue in wartime Europe, abortion politics in the United States, and peace and settler activism in Israel. From all three analyses a single conclusion emerges: the most politically competent individuals are, most often, the least morally competent. This is the central paradox of political morality. These findings cast doubt on strong models of political morality characterized by enlightened moral reasoning and concerted political action, while affirming alternative weak models that fuse activism with sectarian moral interests. They provide empirical support to further upend the liberal vision of democratic character, education, and society.
This paper examines Ian Hacking's arguments in favor of entity realism. It shows that his examples from science do not support his realism. Furthermore, his proposed criterion of experimental use is neither sufficient nor necessary for conferring a privileged status on his preferred unobservables. Nonetheless, his insight is genuine; it may be most profitably seen as part of a more general effort to create a space for a new form of scientific and philosophical certainty, one that does not require foundations.