"Semantic Minimalists" holds that there are virtually no semantically context sensitive expressions in English. In particular, they claim that the semantics for terms like "red", "tall", "ready", "every", or "know" are not (contrary to many popular semantic theories) context sensitive. While minimalism strikes many as obviously false, it will be argued here that the view is more plausible than commonly assumed if one accepts the 'normative' conception of the relation between meaning and use characteristic of the literature on semantic externalism.
Our ascriptions of content to utterances in the past attribute to them a level of determinacy that extends beyond what could supervene upon the usage up to the time of those utterances. If one accepts the truth of such ascriptions, one can either (1) argue that future use must be added to the supervenience base that determines meaning, or (2) argue that such cases show that meaning does not supervene upon use at all. The following will argue against authors such as Lance, Hawthorn and Ebbs that the first of these options is the more promising of the two. However, maintaining the supervenience thesis ultimately requires that the doctrine that use determines meaning be understood as 'normative' in two important ways. The first (more familiar) way is that the function from use to meaning must be of a sort that allows us to maintain a robust distinction between correct usage and actual usage. This first type of normativity is accepted by defenders of many more temporally restricted versions of the supervenience thesis, but the second sort of normativity is unique to theories that extend the supervenience base into the future. In particular, if meaning is partially a function of future use, we can understand other commitments we are often taken to have about meaning, particularly the commitment to meaning being 'determinate', as practical commitments that structure our linguistic practices rather than theoretical commitments that merely describe such practices.
Hilary Putnam has famously argued that we can know that we are not brains in a vat because the hypothesis that we are is self-refuting. While Putnam’s argument has generated interest primarily as a novel response to skepticism, his original use of the brain in a vat scenario was meant to illustrate a point about the “mind/world relationship.” In particular, he intended it to be part of an argument against the coherence of metaphysical realism, and thus to be part of a defense of his conception of truth as idealized rational acceptability. Putnam’s argument has drawn a good deal of criticism already, but it will be argued here that these criticisms fail to capture the central problem with Putnam’s argument. Putnam’s conclusions about the self-refuting character of the brain in a vat hypothesis, rather than simply being a consequence of his semantic externalism, will be shown to be actually out of line with central and plausible aspects of his own account of the relationship between our minds and the world. Reflections on intentionality and semantics ultimately give us no compelling reason to suppose that the beliefs expressed by claims like “I am a brain in a vat” could not be true, but (pace Putnam) this supports neither skepticism nor metaphysical realism.
Ascriptions of content are sensitive not only to our physical and social environment, but also to unforeseeable developments in the subsequent usage of our terms. The paper argues that the problems that may seem to come from endorsing such 'temporally sensitive' ascriptions either already follow from accepting the socially and historically sensitive ascriptions Burge and Kripke appeal to, or disappear when the view is developed in detail. If one accepts that one's society's past and current usage contributes to what one's terms mean, there is little reason not to let its future usage do so as well.
It has frequently been suggested that meaning is, in some important sense, normative. However, precisely what is normative about it is often left without any satisfactory explanation, and the ‘normativity thesis’ has thus, justly, been called into question. That said, it will be argued here that the intuition that meaning is ‘normative’ is on the right track, even if many of the purported explanations for meaning’s normativity are not. In particular, rather than being particularly social, the normativity of meaning may follow from the more logical/epistemic relations between use and meaning. Because of this, some use-based theories will still be able to accommodate the normativity of meaning by allowing that while meaning supervenes upon use, the function from use to meaning is a normative one.
Davidson argues that there can’t be type-type identities between mental and physical events because: (a) if there were such identities, then there would be lawlike statements relating mental and physical events, and (b) there can be no such lawlike statements. According to Davidson, there can be no lawlike connections between the mental and the physical because of the “disparate commitments” of the two realms. Davidson’s argument for this claim can be schematized very roughly as follows: (1) the application of mental predicates is constrained by the constitutive ideal of rationality; (2) the application of physical predicates is not constrained in this way; therefore, (3) there can be no lawlike statements relating the two sorts of predicate. According to Davidson, if we are to ascribe propositional attitudes such as beliefs and desires to people at all, we are committed to finding them to be rational. As Davidson puts it, “[n]othing a person could say or do would count as good enough grounds for the attribution of a straightforwardly and obviously contradictory belief.” If someone were treated as having such manifestly contradictory beliefs, the fault would lie with the interpretation of the person’s…
While holism and atomism are often treated as mutually exclusive approaches to semantic theory, the apparent tension between the two usually results from running together distinct levels of semantic explanation. In particular, there is no reason why one can’t combine an atomistic conception of what the semantic values of our words are (one’s “descriptive semantics”) with a holistic explanation of why they have those values (one’s “foundational semantics”). Most objections to holism can be shown to apply only to holistic versions of descriptive semantics, and do not tell against any sort of holistic foundational semantics. As Davidson’s work will be used to illustrate, by clearly distinguishing foundational and descriptive semantics, one can capture the most appealing features of both holism and atomism.
This paper is concerned with Davidson's argument that very general properties of the theory of interpretation make the skeptical claim that most of our beliefs could turn out to be false insupportable. Conceived as a 'straight' answer to the skeptic, Davidson's argument is not especially convincing. In particular, Davidson's answer to the skeptic presupposes a framework that allows for a new and seemingly more radical skepticism according to which we might not even have beliefs at all. Nevertheless, there is a sense in which Davidson's account of content remaps the conceptual terrain in a fashion that absolves us of the need to rule out the scenarios the skeptic describes. The paper will both present the problems Davidson's position has as a 'straight' solution to skepticism, and discuss the way in which his externalism does weaken the strength of the skeptical challenge.
Radical Interpretation and the Permutation Principle. Davidson has argued that there is no fact of the matter as to what any speaker's words refer to because, even holding truth conditions fixed, a radical interpreter will always be able to come up with many equally good interpretations of the interpretee’s language. This conclusion, which Davidson (following Quine) refers to as the "inscrutability of reference", has caused many to reject the radical interpretation methodology as fundamentally flawed. Nevertheless, it isn’t clear that such widespread inscrutability is a necessary consequence of the radical interpretation methodology. In particular, Davidson claims that the following assumption (which will hereafter be referred to as the "Permutation Principle") is "clearly needed if we are to conclude to the inscrutability of reference".
This paper will appeal to a recent argument for the indeterminacy of translation to show not that meaning is indeterminate, but rather that assertion cannot be explained in terms of an independent grasp of the concept of truth. In particular, it will argue that if we try to explain assertion in terms of truth rather than vice versa, we ultimately will not be able to make sense of the difference between assertion and denial. This problem with such 'semantic' accounts of assertion then illustrates why we need not worry about the purported argument for indeterminacy.
Those sympathetic to the naturalistic side of James hope that his critique of ‘philosophical materialism’ can be separated from those elements of his thinking that are essential to his pragmatism. Such a separation is possible once we see that James’s critique of materialism grows out of his views about its incompatibility with the existence of objective values. Objective values (as James understands them) are incompatible, however, not with materialism in its most general form, but rather with a materialism that understands the ‘material world’ in terms of the sciences of the late nineteenth century. In particular, one could not defend the potential objectivity of value in the way that James hoped if one endorsed the particular ‘pessimistic’ cosmology characteristic of the sciences at the turn of the last century. Consequently, if one rejects certain ‘empirical assumptions’ associated with the science of James’s day, the possibility of a type of ‘melioristic materialism’ opens up, and this sort of materialist could still understand value in the way that James proposes.
Many philosophers have suggested that belief predicates are ambiguous between a de dicto and a de re reading. However, the impression of ambiguity is a function of the narrow range of examples that philosophers focus on. When we consider our ascriptional practices as a whole, the suggestion that belief predicates are ambiguous is neither plausible nor needed to explain the de dicto/de re distinction. This paper will argue that understanding paradigmatic de dicto and de re ascriptions in terms of disavowals from a more basic sort of ascription is preferable to positing an ambiguity in which each of the two sorts of ascription is conceptually primitive.
This paper argues that, according to James, we are committed to there being a kind of stable consensus, and we are committed to its being one that we can recognize ourselves in, but by underwriting such regulative ideals through a ‘will to believe’ rather than a transcendental argument, we make our commitment to there being an end of inquiry a practical rather than theoretical one. Objectivity is something we are committed to making, not something that we are committed to finding already out there. There is thus no limit we are approaching that is independent of our approach. Pragmatism is thus a position between Realism and Subjectivism because it takes it as unsettled which story will ultimately hold for us. Subjectivism may reign even after we do our best, but we might be able to do better, and if we can, it is incumbent upon us to do so.
While engaged in the analysis of philosophically central concepts, analytic philosophers have traditionally relied extensively on their own intuitions about when such concepts can be correctly applied. Intuitions have, however, come under increasingly critical scrutiny of late, and if they turned out not to be a reliable tool for the proper analysis of our concepts, then a radical reworking of analytic philosophy’s methodology would be in order. One influential line of criticism against the use of intuitions argues that they only tell us about our conceptions of things, and not the things themselves. This venerable line of criticism can seem considerably strengthened if one endorses “externalist” accounts of meaning. Nevertheless, the move from semantic externalism to the rejection of intuitions will be shown to be illegitimate if one has a constitutive rather than expressive understanding of the relation between our intuitions and our concepts.
A belief ascription such as “Oedipus believes that his mother is the queen of Thebes” can be understood in two ways, one in which it seems true, and another in which it seems false. It can seem true because the woman who was, in fact, Oedipus’ mother was believed by him to be the queen of Thebes. It can seem false because Oedipus himself would have sincerely denied that Jocasta could be correctly characterized as “Oedipus’s mother.” Belief ascriptions thus seem to admit of two interpretations, and this has suggested to many that belief predicates such as “________ believes that his mother is the queen of Thebes” are ambiguous between a de dicto and a de re reading. However, the impression of ambiguity is a function of the narrow range of examples that philosophers focus on. When we consider our ascriptional practices as a whole, the suggestion that belief predicates are ambiguous is neither plausible nor needed to explain the de dicto/de re distinction. The following will argue that understanding paradigmatic de dicto and de re ascriptions in terms of disavowals from a more basic sort of ascription is preferable to positing a simple ambiguity in which each of the two sorts of ascription is conceptually primitive.
Logic is viewed by many as inseparable from rationality, and James’ ‘rejection of logic’ in A Pluralistic Universe has been viewed as the most flagrantly ‘irrational’ strand in his philosophy. Indeed, the late date of A Pluralistic Universe (the lectures were given in 1908) may tempt some to write it off as inessential to James’ larger philosophical vision. Nevertheless, James’ views on logic flow from currents that run deep in his philosophy, and these ‘naturalistic’ currents can be traced back to works as early as 1879’s “The Sentiment of Rationality.” These currents are crucial to understanding James’ later work, since when viewed in light of the psychological naturalism developed in The Principles of Psychology, James’ so-called ‘rejection of logic’ can seem both plausible and, crucially, rational.
James was always interested in the problem of how our thoughts come to be about the world. Nevertheless, if one takes James to be trying to provide necessary and sufficient conditions for a thought's being about an object, counterexamples to his account will be embarrassingly easy to find. James, however, was not aiming for this sort of analysis of intentionality. Rather than trying to provide necessary and sufficient conditions for every case of a thought's being about an object, James focused his analysis on the prototypical/paradigm cases. This analysis of the core could then be supplemented with additional remarks about how the less prototypical cases could be understood in terms of their relations to (and similarities with) the paradigm. It is argued that this type of analysis is psychologically well motivated, and makes James's account surprisingly plausible.
Logic is viewed by many as inseparable from rationality, and James' 'rejection of logic' in A Pluralistic Universe may be the most flagrantly 'irrational' strand in his philosophy. Nevertheless, when viewed in the context of the psychological naturalism developed in The Principles of Psychology, James' 'rejection of logic' can seem both plausible and, crucially, rational. James' rejection of conceptual logic is deeply connected to his naturalism about concepts, and his belief that there is no reason to think that an intellect "built up of practical interests" need develop concepts that accurately mirror the structure of reality. James' position is, then, not so much that we should give up logic, but rather that (given the practical rather than theoretical nature of our concepts) we should give up the assumption that we are rationally obligated to accept all the apparent logical consequences of all the claims that we accept.
This paper discusses the relationship between the views of James and Royce on representation and their attempts to explain the "possibility of error," views which are, I argue, closer than many have thought. Appreciating where they do differ will point not only to an unstressed problem with Royce's argument for the Absolute but also to some unappreciated features of how James' account of truth ties in with his account of epistemic justification.
William James has been characterized as “the major whipping boy of the later Wittgenstein,” and the currency of this impression of the relation between James and Wittgenstein is understandable. Reading Wittgenstein and his commentators can leave one with the impression that James was a badly muddled “exponent of the tradition in the philosophy of mind that [Wittgenstein] was opposing.” There have been recent attempts to resist this trend, but even these tend to focus on the affinities between the two philosophers, still accepting the prevailing view that Wittgenstein was often critical of James, and that in such cases Wittgenstein was always right and James was always wrong. By contrast, by focusing on Wittgenstein’s discussion of James’s “if-feeling”, it will be argued that Wittgenstein’s criticisms of James are often not as damaging, or even as extensive, as has often been assumed.
While philosophers of language have traditionally relied upon their intuitions about cases when developing theories of reference, this methodology has recently been attacked on the grounds that intuitions about reference, far from being universal, show significant cultural variation, thus undermining their relevance for semantic theory. I’ll attempt to demonstrate that (1) such criticisms do not, in fact, undermine the traditional philosophical methodology, and (2) our underlying intuitions about the nature of reference may be more universal than the authors suppose.
A brief (10,000 word) introduction to James's philosophy with particular focus on the relation between James's naturalism and his account of various normative notions like rationality, goodness and truth.
While engaged in the analysis of topics such as the nature of knowledge, meaning, or justice, analytic philosophers have traditionally relied extensively on their own intuitions about when the relevant terms can, and can't, be correctly applied. Consequently, if intuitions about possible cases turned out not to be a reliable tool for the proper analysis of philosophically central concepts, then a radical reworking of philosophy's (or at least analytic philosophy's) methodology would seem to be in order. It is thus not surprising that the increasingly critical scrutiny that intuitions have received of late has produced what has been referred to as a "crisis" in analytic philosophy. This paper will argue, however, that at least those criticisms that stem from recent work on semantic externalism are not as serious as their proponents have claimed. Indeed, this paper will argue that while conceptual intuitions (and the analyses that result from them) will have to be recognized as fallible, they still have a prima facie claim to correctness. A naturalistic and externalistic account of concepts thus merely requires that the methodology of conceptual analysis be reinterpreted (from a 'Platonic' to a 'constructive' model) rather than given up.
Temporal externalists argue that ascriptions of thought and utterance content can legitimately reflect contingent conceptual developments that are only settled after the time of utterance. While the view has been criticized for failing to accord with our.
This paper will argue that while traditional accounts of word meaning have problems accounting for how the referent of a non-ambiguous/non-indexical term can shift from context to context, a moderate version of semantic holism can do so by understanding the comparative weight of the extension-determining beliefs as itself something which can vary from context to context. The view will then be used to give an account of some of the more problematic cases in the literature associated with semantic externalism.
'Epistemic' theories of vagueness notoriously claim that (despite the appearances to the contrary) all of our vague terms have sharp boundaries; it's just that we can't know what they are. Epistemic theories are typically criticized for failing to explain (1) the source of the ignorance postulated, and (2) how our terms could come to have such precise boundaries. Both of these objections will, however, be shown to rest on certain 'presentist' assumptions about the relation between use and meaning, and if one allows that the meaning-constitutive elements of our linguistic practices can extend into the future, the possibility of a new sort of 'normative epistemicism' emerges.
The purpose of this paper is to motivate and defend a recognizable version of N. L. Wilson's "Principle of Charity." Doing so will involve: (1) distinguishing it from the significantly different versions of the Principle familiar through the work of Quine and Davidson; (2) showing that it is compatible with, among other things, both semantic externalism and "simulation" accounts of interpretation; and (3) explaining how it follows from plausible constraints relating to the connection between interpretation and self-interpretation. Finally, it will be argued that Charity represents a type of "minimal individualism" that is closely tied to first person authority, and that endorsing Charity in our interpretations of others reflects a commitment to capturing, from the third-person starting point, their first-personal point of view.
This paper discusses an "expressive constraint" on accounts of thought and language which requires that when a speaker expresses a belief by sincerely uttering a sentence, the utterance and the belief have the same content. It will be argued that this constraint should be viewed as expressing a conceptual connection between thought and language rather than a mere empirical generalization about the two. However, the most obvious accounts of the relation between thought and language compatible with the constraint (giving an (...) independent account of one of either linguistic meaning or thought content and understanding the other in terms of it) both face serious difficulties. Because of this, the following will suggest an alternative picture of the relation between thought and language that remains compatible with the constraint. (shrink)
Semantic holists view what one's terms mean as a function of all of one's usage. Holists will thus be coherentists about semantic justification: showing that one's usage of a term is semantically justified involves showing how it coheres with the rest of one's usage. Semantic atomists, by contrast, understand semantic justification in a foundationalist fashion. Saul Kripke has, on Wittgenstein's behalf, famously argued for a type of skepticism about meaning and semantic justification. However, Kripke's argument has bite only if one understands semantic justification in foundationalist terms. Consequently, Kripke's arguments lead not to a type of skepticism about meaning, but rather to the conclusion that one should be a coherentist about semantic justification, and thus a holist about semantic facts.
This paper examines popular 'conventionalist' explanations of why philosophers need not back up their claims about how 'we' use our words with empirical studies of actual usage. It argues that such explanations are incompatible with a number of currently popular and plausible assumptions about language's 'social' character. Alternate explanations of the philosopher's purported entitlement to make a priori claims about 'our' usage are then suggested. While these alternate explanations would, unlike the conventionalist ones, be compatible with the more social picture of language, they are each shown to face serious problems of their own.
Hilary Putnam has famously argued that we can know that we are not brains in a vat because the hypothesis that we are is self-refuting. While Putnam's argument has generated interest primarily as a novel response to skepticism, his original use of the brain in a vat scenario was meant to illustrate a point about the "mind/world relationship." In particular, he intended it to be part of an argument against the coherence of metaphysical realism, and thus to be part of a defense of his conception of truth as idealized rational acceptability. Putnam's conclusions about the scenario are, however, actually out of line with central and plausible aspects of his own account of the relationship between our minds and the world. Reflections on semantics give us no compelling reason to suppose that claims like "I am a brain in a vat" could not turn out to be true.
This paper argues that Davidson's claim that the connection between belief and the "constitutive ideal of rationality" precludes the possibility of any type-type identities between mental and physical events relies on blurring the distinction between two ways of understanding this "constitutive ideal", and that no consistent understanding of the constitutive ideal allows it to play the dialectical role Davidson intends for it.
John Haugeland has recently attempted to provide a naturalistic account of intentionality that explains how we can (collectively) misidentify objects in the world in terms of the interplay of two types of 'recognitional' skill. Nevertheless, it is argued here that his inegalitarian conception of the two sorts of skill leaves him with a quasi-conventionalist account of our relation to the world which lacks the more robust sort of objectivity that a more holistic theory could provide.
It has become increasingly popular to suggest that non-individualistic theories of content undermine our purported a priori knowledge of such contents because they entail that we lack the ability to distinguish our thoughts from alternative thoughts with different contents. However, problems relating to such knowledge of 'comparative' content tell just as much against individualism as non-individualism. Indeed, the problems presented by individualistic theories of content for self-knowledge are at least as serious as, if not more serious than, those presented by non-individualistic theories. Consequently, considerations of self-knowledge give one no reason to embrace individualism. If anything, they give one reason to reject it.