Many people claim that semantic content is normative, and that therefore naturalistic theories of content face a potentially insuperable difficulty. The normativity of content allegedly undermines naturalism by introducing a gap between semantic 'ought's and the explanatory resources of naturalism. I argue here that this problem is not ultimately pressing for naturalists. The normativity thesis, I maintain, is ambiguous: it could mean either that the content of a term prescribes a pattern of use, or that it merely determines which pattern of use can be described as 'correct'. For the anti-naturalist argument to go forward, content must be prescriptive. I argue, however, that it is not. Moreover, the thesis that content supplies standards for correct use is insufficient to support a similar, a priori objection to naturalism.
It is widely held, even among non-naturalists, that the moral supervenes on the natural. This is to say that for any two metaphysically possible worlds w and w′, and for any entities x in w and y in w′, any isomorphism between x and y that preserves the natural properties preserves the moral properties. In this paper, I put forward a conceivability argument against moral supervenience, assuming non-naturalism. First, I argue that though utilitarianism may be true, and the trolley driver is permitted to kill the one to save the five, there is a conceivable scenario that is just like our world in all natural respects, yet at which deontology is true, and the trolley driver is not permitted to kill the one to save the five. I then argue that in the special case of morality, it is possible to infer the possibility of such a scenario from its conceivability. It follows that supervenience is false.
In this paper, I take issue with a core commitment of logical conventionalism: that we impose a logic on ourselves by adopting general linguistic conventions governing our use of logical terms, thereby determining the meanings of the logical constants and which of our inferences are valid. Drawing on Kripke’s ‘adoption problem’, I argue that general logical principles cannot be adopted, either explicitly or implicitly. I go on to argue that the meanings of our logical terms, and the validity of our inferences, cannot depend on our adoption of logico-linguistic conventions.
It is highly intuitive that the future is open and the past is closed—whereas it is unsettled whether there will be a fourth world war, it is settled that there was a first. Recently, it has become increasingly popular to claim that the intuitive openness of the future implies that contingent statements about the future, such as ‘There will be a sea battle tomorrow,’ are non-bivalent (neither true nor false). In this paper, we argue that the non-bivalence of future contingents is at odds with our pre-theoretic intuitions about the openness of the future. These intuitions are revealed by our pragmatic judgments concerning the correctness and incorrectness of assertions of future contingents. We argue that the pragmatic data, together with a plausible account of assertion, show that in many cases we take future contingents to be true (or to be false), though we take the future to be open in relevant respects. It follows that appeals to intuition to support the non-bivalence of future contingents are untenable. Intuition favours bivalence.
In Making It Explicit, Brandom aims to articulate an account of conceptual content that accommodates its normativity—a requirement on theories of content that Brandom traces to Wittgenstein's rule-following considerations. It is widely held that the normativity requirement cannot be met, or at least not with ease, because theories of content face an intractable dilemma. Brandom proposes to evade the dilemma by adopting a middle road—one that uses normative vocabulary, but treats norms as implicit in practices. I argue that this proposal fails to evade the dilemma, as Brandom himself understands it. Despite his use of normative vocabulary, Brandom's theory fares no better than the reductionist theories he criticises. I consider some responses that Brandom might make to my charges, and finally conclude that his proposal founders on his own criteria.
1. Introduction

A considerable number of philosophers maintain that meaning is intrinsically normative. In this journal, Daniel Whiting has defended the normativity of meaning against some of my recent objections.1 This paper responds to Whiting's arguments.
It is disputed what norm, if any, governs assertion. We address this question by looking at assertions of future contingents: statements about the future that are neither metaphysically necessary nor metaphysically impossible. Many philosophers think that future contingents are not truth apt, which together with a Truth Norm or a Knowledge Norm of assertion implies that assertions of these future contingents are systematically infelicitous. In this article, we argue that our practice of asserting future contingents is incompatible with the view that they are not truth apt. We consider a range of norms of assertion and argue that the best explanation of the data is provided by the view that assertion is governed by the Knowledge Norm.
This paper takes issue with an influential interpretationist argument for physicalism about intentionality based on the possibility of radical interpretation. The interpretationist defends the physicalist thesis that the intentional truths supervene on the physical truths by arguing that it is possible for a radical interpreter, who knows all of the physical truths, to work out the intentional truths about what an arbitrary agent believes, desires, and means without recourse to any further empirical information. One of the most compelling arguments for the possibility of radical interpretation, associated most closely with David Lewis and Donald Davidson, gives a central role to decision-theoretic representation theorems, which demonstrate that if an agent’s preferences satisfy certain constraints, it is possible to deduce probability and utility functions that represent her beliefs and desires. We argue that an interpretationist who wants to rely on existing representation theorems in defence of the possibility of radical interpretation faces a trilemma, each horn of which is incompatible with the possibility of radical interpretation.
It is frequently said that belief aims at truth, in an explicitly normative sense—that is, that one ought to believe the proposition that p if, and only if, p is true. This truth norm is frequently invoked to explain why we should seek evidential justification in our beliefs, or why we should try to be rational in our belief formation—it is because we ought to believe the truth that we ought to follow the evidence in belief revision. In this paper, I argue that this view is untenable. The truth norm clashes with plausible evidential norms in a wide range of cases, such as when we have excellent but misleading evidence for a falsehood or no evidence for a truth. I will consider various ways to resolve this conflict and argue that none of them work. However, I will ultimately attempt to vindicate the love of truth, by arguing that knowledge is the proper epistemic goal. The upshot is that we should not aim merely to believe the truth; we should aim to know it.

Keywords: Belief; Knowledge; Normativity; Rationality; Truth.
In a recent paper, Alexander Greenberg defends a truth norm of belief according to which, if one has some doxastic attitude towards p, one ought to believe that p if and only if p is true. He responds, in particular, to the ‘blindspot’ objection to truth norms such as DA: in the face of true blindspots, such as it is raining and nobody believes that it is raining, truth norms such as DA are unsatisfiable; they entail that one ought to believe p, but if one does believe p, they entail that it is not the case that one ought to believe p. In this paper, it is argued that Greenberg’s response to the blindspot objection is unsatisfactory.
This chapter investigates the view that meaning is normative. Meaning is understood here in a broad sense to include such semantic properties as sense, reference, truth‐conditions, content, and the like. Normativity can either be viewed as a property of representations or as a feature of the world. The view that meaning involves rule‐following or a normative judgment of some kind is untenable, and in any case has no bearing on the hard problem of intentionality. Moreover, the view that meaning is a source of normativity is implausible, and there is little evidence that known difficulties with the reductive analysis of meaning can be resolved by adding normativity to the explanation. Finally, the chapter reviews Gibbard's recent suggestion that the concept of meaning is normative; it is argued that this suggestion is implausible and that his proposed expressivist resolution of the hard problem of intentionality is untenable.
This note addresses two of Gibbard's central contentions in Meaning and Normativity: first, that the concept of meaning is normative, and second, that an expressivist account of semantic concepts and statements can shed light on the hard problem of intentionality, the problem of explaining intentionality in naturalistic terms.
John MacFarlane has recently argued that his brand of truth relativism provides the best solution to the puzzle of future contingents: assertions about the future that express propositions that are metaphysically neither necessary nor impossible. In this paper, we show that even if we grant all of the metaphysical, semantic and pragmatic assumptions in terms of which MacFarlane sets and aims to solve the puzzle, his truth relativism is not apt to solve the problem of future contingents. We argue that the theory fails to vindicate the intuition that future contingent propositions are neither true nor false, leaving the theory open to a charge of reductio. We show that these problems cannot be answered while preserving the core tenets of truth relativism.
John MacFarlane has recently argued that his brand of truth relativism – Assessment Sensitivity – provides the best solution to the puzzle of future contingents: statements about the future that are metaphysically neither necessary nor impossible. In this paper, we show that even if we grant all of the metaphysical, semantic and pragmatic assumptions in terms of which MacFarlane sets and solves the puzzle, Assessment Sensitivity is ultimately self-refuting.
In his recent book, Meaning and Normativity, Allan Gibbard argues at length that meta-ethical expressivism can be profitably extended to semantic and intentional language: meta-linguistic discourse about meaning, reference, content, and the like. This chapter argues that the extension of expressivism to semantic discourse is unprofitable and—worse still—in a certain sense self-undermining. It is unprofitable because it sheds no light on the problem of intentionality; and it undermines itself because many of the sentences that make up the expressivist’s theory are semantic sentences, and if these are understood to express non-cognitive attitudes of some kind, the expressivist’s explanations are spurious.
The Rules of Thought, by Jonathan Jenkins Ichikawa and Benjamin Jarvis, is a dense and ambitious book whose principal aim is to defend the view that philosophical inquiry is a priori inquiry into essential natures. The book covers a broad range of philosophical issues spanning the philosophy of mind and language, the epistemology of metaphysical modality and the philosophy of philosophy. It will be of considerable interest to many, since there is something in it for just about everyone. That said, the authors do not do as much as one might like to make their views accessible to the uninitiated or convincing to the unconverted.
In Unbelievable Errors, Bart Streumer defends the error theory by rejecting all competitors to it. My aim here is to defend one brand of realism from Streumer’s objections: primitivism. The primitivist holds that there exist sui generis normative properties that do not supervene on any descriptive properties. It is argued that Streumer’s objections to primitivism can be met.
This chapter discusses the application of formal methods from social choice theory to the metasemantic question of whether radical interpretation is possible. Radical interpretation involves deducing semantic truths from non-semantic truths by appeal to certain a priori principles or criteria, such as the principle of charity. A familiar view is that the intended interpretation is the one that best meets a combination of constraints. It is suggested that this situation can be modelled as follows: each constraint determines a binary relation on the set X of interpretations that is transitive and complete. The radical interpreter’s task is to determine an overall ordering as a function from the profile of individual orderings. The application of Arrow’s theorem in this context is discussed.
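The model described in the last abstract can be stated compactly. The following formalization is an illustrative reconstruction under the abstract's own assumptions (constraints as orderings, aggregation as a function on profiles); the notation is not the chapter's.

```latex
% Illustrative sketch: radical interpretation as a social-choice problem.
% X, C, \succsim_i, and F are assumed notation, not the chapter's own.
Let $X$ be the set of candidate interpretations, with $|X| \geq 3$, and let
$C = \{1, \dots, n\}$ be the set of constraints (e.g.\ charity, simplicity).
Each constraint $i \in C$ induces a weak ordering $\succsim_i$ on $X$
(transitive and complete). An aggregation rule maps each profile of
constraint orderings to an overall ordering of interpretations:
\[
  F : (\succsim_1, \dots, \succsim_n) \longmapsto \succsim .
\]
Arrow's theorem then applies: if $F$ has unrestricted domain and satisfies
weak Pareto and independence of irrelevant alternatives, $F$ is
dictatorial, i.e.\ there is a single constraint $d \in C$ whose ordering
settles the overall ordering regardless of the other constraints.
```

On this modelling, the interpreter's task of weighing several metasemantic constraints is formally analogous to aggregating voters' preferences, which is what makes Arrow's impossibility result relevant.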