Normative Externalism argues that it is not important that people live up to their own principles. What matters, in both ethics and epistemology, is that they live up to the correct principles: that they do the right thing, and that they believe rationally. This stance, that what matters are the correct principles, not one's own principles, has implications across ethics and epistemology. In ethics, it undermines the ideas that moral uncertainty should be treated just like factual uncertainty, that moral ignorance frequently excuses moral wrongdoing, and that hypocrisy is a vice. In epistemology, it suggests we need new treatments of higher-order evidence, and of peer disagreement, and of circular reasoning, and the book suggests new approaches to each of these problems. Although the debates in ethics and in epistemology are often conducted separately, putting them in one place helps bring out their common themes. One common theme is that the view that one should live up to one's own principles looks less attractive when people have terrible principles, or when following their own principles would lead to riskier or more aggressive action than the correct principles. Another common theme is that asking people to live up to their principles leads to regresses. It can be hard to know what action or belief complies with one's principles. And now we can ask: in such a case, should a person do what they think their principles require, or what their principles actually require? Both answers lead to problems, and the best way to avoid these problems is to simply say people should follow the correct principles.
A very simple contextualist treatment of a sentence containing an epistemic modal, e.g. a might be F, is that it is true iff for all the contextually salient community knows, a is F. It is widely agreed that the simple theory will not work in some cases, but the counterexamples produced so far seem amenable to a more complicated contextualist theory. We argue, however, that no contextualist theory can capture the evaluations speakers naturally make of sentences containing epistemic modals. If we want to respect these evaluations, our best option is a relativist theory of epistemic modals. On a relativist theory, an utterance of a might be F can be true relative to one context of evaluation and false relative to another. We argue that such a theory does better than any rival approach at capturing all the behaviour of epistemic modals.
Intuitively, Gettier cases are instances of justified true beliefs that are not cases of knowledge. Should we therefore conclude that knowledge is not justified true belief? Only if we have reason to trust intuition here. But intuitions are unreliable in a wide range of cases. And it can be argued that the Gettier intuitions have a greater resemblance to unreliable intuitions than to reliable intuitions. What's distinctive about the faulty intuitions, I argue, is that respecting them would mean abandoning a simple, systematic and largely successful theory in favour of a complicated, disjunctive and idiosyncratic theory. So maybe respecting the Gettier intuitions was the wrong reaction; we should instead have been explaining why we are all so easily misled by these kinds of cases.
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple—there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.
In his Principles of Philosophy, Descartes says: “Finally, it is so manifest that we possess a free will, capable of giving or withholding its assent, that this truth must be reckoned among the first and most common notions which are born with us.”
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue the best solution to this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
We have some of our properties purely in virtue of the way we are. (Our mass is an example.) We have other properties in virtue of the way we interact with the world. (Our weight is an example.) The former are the intrinsic properties, the latter are the extrinsic properties. This seems to be an intuitive enough distinction to grasp, and hence the intuitive distinction has made its way into many discussions in philosophy, including discussions in ethics, philosophy of mind, metaphysics, epistemology and philosophy of physics. Unfortunately, when we look more closely at the intuitive distinction, we find reason to suspect that it conflates a few related distinctions, and that each of these distinctions is somewhat resistant to analysis.
There is a lot that we don't know. That means that there are a lot of possibilities that are, epistemically speaking, open. For instance, we don't know whether it rained in Seattle yesterday. So, for us at least, there is an epistemic possibility where it rained in Seattle yesterday, and one where it did not. What are these epistemic possibilities? They do not match up with metaphysical possibilities - there are various cases where something is epistemically possible but not metaphysically possible, and vice versa. How do we understand the semantics of statements of epistemic modality? The ten new essays in this volume explore various answers to these questions, including those offered by contextualism, relativism, and expressivism.
Intelligent activity requires the use of various intellectual skills. While these skills are connected to knowledge, they should not be identified with knowledge. There are realistic examples where the skills in question come apart from knowledge. That is, there are realistic cases of knowledge without skill, and of skill without knowledge. Whether a person is intelligent depends, in part, on whether they have these skills. Whether a particular action is intelligent depends, in part, on whether it was produced by an exercise of skill. These claims promote a picture of intelligence that is in tension with a strongly intellectualist picture, though they are not in tension with a number of prominent claims recently made by intellectualists.
In previous work I’ve defended an interest-relative theory of belief. This paper continues the defence. It has four aims. 1. To offer a new kind of reason for being unsatisfied with the simple Lockean reduction of belief to credence. 2. To defend the legitimacy of appealing to credences in a theory of belief. 3. To illustrate the importance of theoretical, as well as practical, interests in an interest-relative account of belief. 4. To revise my account to cover propositions that are practically and theoretically irrelevant to the agent.
Conciliatory theories of disagreement face a revenge problem; they cannot be coherently believed by one who thinks they have peers who are not conciliationists. I argue that this is a deep problem for conciliationism.
I set out and defend a view on indicative conditionals that I call “indexical relativism”. The core of the view is that which proposition is expressed by an utterance of a conditional is a function of the speaker’s context and the assessor’s context. This implies a kind of relativism, namely that a single utterance may be correctly assessed as true by one assessor and false by another.
Many writers have held that in his later work, David Lewis adopted a theory of predicate meaning such that the meaning of a predicate is the most natural property that is (mostly) consistent with the way the predicate is used. That orthodox interpretation is shared by both supporters and critics of Lewis's theory of meaning, but it has recently been strongly criticised by Wolfgang Schwarz. In this paper, I accept many of Schwarz's criticisms of the orthodox interpretation, and add some more. But I also argue that the orthodox interpretation has a grain of truth in it, and seeing that helps us appreciate the strength of Lewis's late theory of meaning.
In “Against Arguments from Reference” (Mallon et al., 2009), Ron Mallon, Edouard Machery, Shaun Nichols, and Stephen Stich (hereafter, MMNS) argue that recent experiments concerning reference undermine various philosophical arguments that presuppose the correctness of the causal-historical theory of reference. We will argue three things in reply. First, the experiments in question—concerning Kripke’s Gödel/Schmidt example—don’t really speak to the dispute between descriptivism and the causal-historical theory; though the two theories are empirically testable, we need to look at quite different data than MMNS do to decide between them. Second, the Gödel/Schmidt example plays a different, and much smaller, role in Kripke’s argument for the causal-historical theory than MMNS assume. Finally, and relatedly, even if Kripke is wrong about the Gödel/Schmidt example—indeed, even if the causal-historical theory is not the correct theory of names for some human languages—that does not, contrary to MMNS’s claim, undermine uses of the causal-historical theory in philosophical research projects.
Authors have a lot of leeway with regard to what they can make true in their story. In general, if the author says that p is true in the fiction we’re reading, we believe that p is true in that fiction. And if we’re playing along with the fictional game, we imagine that, along with everything else in the story, p is true. But there are exceptions to these general principles. Many authors, most notably Kendall Walton and Tamar Szabó Gendler, have discussed apparent counterexamples when p is “morally deviant”. Many other statements that are conceptually impossible also seem to be counterexamples. In this paper I do four things. I survey the range of counterexamples, or at least putative counterexamples, to the principles. Then I turn to explanations of the counterexamples. I argue, following Gendler, that the explanation cannot simply be that morally deviant claims are impossible. I argue that the distinctive attitudes we have towards moral propositions cannot explain the counterexamples, since some of the examples don’t involve moral concepts. And I put forward a proposed explanation that turns on the role of ‘higher-level concepts’, concepts that if they are satisfied are satisfied in virtue of more fundamental facts about the world, in fiction, and in imagination.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
Suppose a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p. For simplicity, we’ll normally assume that she judges that p, though we’re also interested in cases where the agent makes other judgments, such as that p is probable, or that p is well-supported by the evidence. We’ll also assume, again for simplicity, that the agent knows that E is the basis for her judgment. Finally, we’ll assume that the judgment is a rational one to make, though we won’t assume the agent knows this. Indeed, whether the agent can always know that she’s making a rational judgment when in fact she is will be of central importance in some of the debates that follow.
Over the last two decades, William Lycan’s work on the semantics of conditionals has been distinguished by his careful attention to the connection between syntax and semantics, and more generally by his impeccable methodology. Lycan takes compositionality seriously, so he requires that the meaning of compound expressions like ‘even if’ be a combination of the meanings of the constituent expressions, here ‘even’ and ‘if’. After reading his work, it’s hard to take seriously work that does not share this methodology.
Three objections have recently been levelled at the analysis of intrinsicness offered by Rae Langton and David Lewis. While these objections do seem telling against the particular theory Langton and Lewis offer, they do not threaten the broader strategy Langton and Lewis adopt: defining intrinsicness in terms of combinatorial features of properties. I show how to amend their theory to overcome the objections without abandoning the strategy.
This paper presents a new theory of the truth conditions for indicative conditionals. The theory allows us to give a fairly unified account of the semantics for indicative and subjunctive conditionals, though there remains a distinction between the two classes. Put simply, the idea behind the theory is that the distinction between the indicative and the subjunctive parallels the distinction between the necessary and the a priori. Since that distinction is best understood formally using the resources of two-dimensional modal logic, those resources will be brought to bear on the logic of conditionals.
Timothy Williamson has recently argued that few mental states are luminous, meaning that to be in that state is to be in a position to know that you are in the state. His argument rests on the plausible principle that beliefs only count as knowledge if they are safely true. That is, any belief that could easily have been false is not a piece of knowledge. I argue that the form of the safety rule Williamson uses is inappropriate, and the correct safety rule might not conflict with luminosity.
Recently four different papers have suggested that the supervaluational solution to the Problem of the Many is flawed. Stephen Schiffer (1998, 2000a, 2000b) has argued that the theory cannot account for reports of speech involving vague singular terms. Vann McGee and Brian McLaughlin (2000) say the theory cannot, yet, account for vague singular beliefs. Neil McKinnon (2002) has argued that we cannot provide a plausible theory of when precisifications are acceptable, which the supervaluational theory needs. And Roy Sorensen (2000) argues that supervaluationism is inconsistent with a directly referential theory of names. McGee and McLaughlin see the problem they raise as a cause for further research, but the other authors all take the problems they raise to provide sufficient reasons to jettison supervaluationism. I will argue that none of these problems provide such a reason, though the arguments are valuable critiques. In many cases, we must make some adjustments to the supervaluational theory to meet the posed challenges. The goal of this paper is to make those adjustments, and meet the challenges.
I defend interest-relative invariantism from a number of recent attacks. One common thread to my response is that interest-relative invariantism is a much weaker thesis than is often acknowledged, and a number of the attacks only challenge very specific, and I think implausible, versions of it. Another is that a number of the attacks fail to acknowledge how many things we have independent reason to believe knowledge is sensitive to. Whether there is a defeater for someone's knowledge can be sensitive to all manner of features of their environment, as the host of examples from the post-Gettier literature shows. Adding in interest-sensitive defeaters is a much less radical move than most critics claim it is.
We argue against the knowledge rule of assertion, and in favour of integrating the account of assertion more tightly with our best theories of evidence and action. We think that the knowledge rule has an incredible consequence when it comes to practical deliberation, that it can be right for a person to do something that she can't properly assert she can do. We develop some vignettes that show how this is possible, and how odd this consequence is. We then argue that these vignettes point towards alternate rules that tie assertion to sufficient evidence-responsiveness or to proper action. These rules have many of the virtues that are commonly claimed for the knowledge rule, but lack the knowledge rule's problematic consequences when it comes to assertions about what to do.
In a recent article, Adam Elga outlines a strategy for “Defeating Dr Evil with Self-Locating Belief”. The strategy relies on an indifference principle that is not up to the task. In general, there are two things to dislike about indifference principles: adopting one normally means confusing risk with uncertainty, and they tend to lead to incoherent views in some ‘paradoxical’ situations. I argue that both kinds of objection can be levelled against Elga’s indifference principle. There are also some difficulties with the concept of evidence that Elga uses, and these create further difficulties for the principle.
What the world needs now is another theory of vagueness. Not because the old theories are useless. Quite the contrary, the old theories provide many of the materials we need to construct the truest theory of vagueness ever seen. The theory shall be similar in motivation to supervaluationism, but more akin to many-valued theories in conceptualisation. What I take from the many-valued theories is the idea that some sentences can be truer than others. But I say very different things about the ordering over sentences this relation generates. I say it is not a linear ordering, so it cannot be represented by the real numbers. I also argue that since there is higher-order vagueness, any mapping between sentences and mathematical objects is bound to be inappropriate. This is no cause for regret; we can say all we want to say by using the comparative ‘truer than’ without mapping it onto some mathematical objects. From supervaluationism I take the idea that we can keep classical logic without keeping the familiar bivalent semantics for classical logic. But my preservation of classical logic is more comprehensive than is normally permitted by supervaluationism, for I preserve classical inference rules as well as classical sequents. And I do this without relying on the concept of acceptable precisifications as an unexplained explainer. The world does not need another guide to varieties of theories of vagueness, especially since Timothy Williamson (1994) and Rosanna Keefe (2000) have already provided quite good guides. I assume throughout familiarity with popular theories of vagueness.
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
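The abstract's talk of probability functions "relative to a logic" admits a compact statement. The axioms below are my paraphrase of the standard generalization found in this literature, not a quotation from the paper; $\vdash_L$ is the consequence relation of the logic $L$. Taking $L$ to be classical logic recovers the usual finitely additive Kolmogorov probability functions; taking $L$ to be intuitionistic logic yields the class the authors call intuitionistic probability functions.

```latex
% P maps sentences to [0,1]; P is an L-probability function iff:
\begin{align*}
&\text{(P0)}\quad \varphi \vdash_L \bot \;\Rightarrow\; P(\varphi) = 0
  && \text{(refutable sentences get probability 0)}\\
&\text{(P1)}\quad {\vdash_L}\, \varphi \;\Rightarrow\; P(\varphi) = 1
  && \text{(theorems of } L \text{ get probability 1)}\\
&\text{(P2)}\quad \varphi \vdash_L \psi \;\Rightarrow\; P(\varphi) \le P(\psi)
  && \text{(monotonicity over } L\text{-consequence)}\\
&\text{(P3)}\quad P(\varphi \lor \psi) + P(\varphi \land \psi) = P(\varphi) + P(\psi)
  && \text{(additivity)}
\end{align*}
```

Since intuitionistic logic proves fewer sequents than classical logic, (P1) and (P2) constrain the intuitionistic class less, which is what lets it evade the objections the paper discusses.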
Gordon Belot has recently developed a novel argument against Bayesianism. He shows that there is an interesting class of problems that, intuitively, no rational belief forming method is likely to get right. But a Bayesian agent’s credence, before the problem starts, that she will get the problem right has to be 1. This is an implausible kind of immodesty on the part of Bayesians. My aim is to show that while this is a good argument against traditional, precise Bayesians, the (...) argument doesn’t neatly extend to imprecise Bayesians. As such, Belot’s argument is a reason to prefer imprecise Bayesianism to precise Bayesianism. (shrink)
Michael Strevens’s book Depth is a great achievement. To say anything interesting, useful and true about explanation requires taking on fundamental issues in the metaphysics and epistemology of science. So this book not only tells us a lot about scientific explanation, it has a lot to say about causation, lawhood, probability and the relation between the physical and the special sciences. It should be read by anyone interested in any of those questions, which includes presumably the vast majority of readers of this journal. One of its many virtues is that it lets us see more clearly what questions about explanation, causation, lawhood and so on need answering, and frames those questions in perspicuous ways. I’m going to focus on one of these questions, what I’ll call the Goldilocks problem. As it turns out, I’m not going to agree with all the details of Strevens’s answer to this problem, though I suspect that something like his answer is right. At least, I hope something like his answer is right; if it isn’t, I’m not sure where else we can look.
The Sleeping Beauty puzzle provides a nice illustration of the approach to self-locating belief defended by Robert Stalnaker in Our Knowledge of the Internal World (Stalnaker, 2008), as well as a test of the utility of that method. The setup of the Sleeping Beauty puzzle is by now fairly familiar. On Sunday Sleeping Beauty is told the rules of the game, and a (known to be) fair coin is flipped. On Monday, Sleeping Beauty is woken, and then put back to sleep. If, and only if, the coin landed tails, she is woken again on Tuesday after having her memory of the Monday awakening erased. On Wednesday she is woken again and the game ends. There are a few questions we can ask about Beauty’s attitudes as the game progresses. We’d like to know what her credence that the coin landed heads should be (a) Before she goes to sleep Sunday; (b) When she wakes on Monday; (c) When she wakes on Tuesday; and (d) When she wakes on Wednesday. Standard treatments of the Sleeping Beauty puzzle ignore (d), run together (b) and (c) into one (somewhat ill-formed) question, and then divide theorists into ‘halfers’ or ‘thirders’ depending on how they answer it. Following Stalnaker, I’m going to focus on (b) here, though I’ll have a little to say about (c) and (d) as well. I’ll be following orthodoxy in taking 1/2 to be the clear answer to (a), and in taking the correct answers to (b) and (c) to be independent of how the coin lands, though I’ll briefly question that assumption at the end. An answer to these four questions should respect two different kinds of constraints. The answer for day n should make sense ‘statically’. It should be a sensible answer to the question of what Beauty should do given what information she then has. And the answer should make sense ‘dynamically’. It should be a sensible answer to the question of how Beauty should have updated her credences from some earlier day, given rational credences on the earlier day. As has been fairly clear since the discussion of the problem in Elga (2000), Sleeping Beauty is puzzling because static and dynamic considerations appear to push in different directions.
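The tension between the halfer and thirder answers can be made vivid with a little exact bookkeeping (the sketch and its variable names are mine, not the paper's): counting runs of the experiment makes heads and tails equally likely, while counting awakenings makes heads only a third likely.

```python
from fractions import Fraction

# Exact bookkeeping for the Sleeping Beauty protocol with a fair coin.
# Heads: Beauty is woken once (Monday). Tails: woken twice (Monday, Tuesday).
outcomes = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
awakenings = {"heads": 1, "tails": 2}

# Per-run count: the probability that a run of the experiment is a heads-run.
per_run_heads = outcomes["heads"]  # 1/2, the halfer's number

# Per-awakening count: the fraction of expected awakenings that occur in
# heads-runs.
expected_awakenings = sum(outcomes[o] * awakenings[o] for o in outcomes)  # 3/2
heads_awakenings = outcomes["heads"] * awakenings["heads"]                # 1/2
per_awakening_heads = heads_awakenings / expected_awakenings  # 1/3, the thirder's number

print(per_run_heads, per_awakening_heads)
```

The two answers thus correspond to two natural but conflicting ways of counting the same setup: by runs or by awakenings.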
Uncertainty plays an important role in The General Theory, particularly in the theory of interest rates. Keynes did not provide a theory of uncertainty, but he did make some enlightening remarks about the direction he thought such a theory should take. I argue that some modern innovations in the theory of probability allow us to build a theory which captures these Keynesian insights. If this is the right theory, however, uncertainty cannot carry its weight in Keynes’s arguments. This does not mean that the conclusions of these arguments are necessarily mistaken; in their best formulation they may succeed with merely an appeal to risk.
There are many controversial theses about intrinsicness and duplication. The first aim of this paper is to introduce a puzzle that shows that two of the uncontroversial sounding ones can’t both be true. The second aim is to suggest that the best way out of the puzzle requires sharpening some distinctions that are too frequently blurred, and adopting a fairly radical reconception of the ways things are.
Recently, Timothy Williamson has argued that considerations about margins of errors can generate a new class of cases where agents have justified true beliefs without knowledge. I think this is a great argument, and it has a number of interesting philosophical conclusions. In this note I’m going to go over the assumptions of Williamson’s argument. I’m going to argue that the assumptions which generate the justification without knowledge are true. I’m then going to go over some of the recent arguments in epistemology that are refuted by Williamson’s work. And I’m going to end with an admittedly inconclusive discussion of what we can know when using an imperfect measuring device.
Nick Bostrom argues that if we accept some plausible assumptions about how the future will unfold, we should believe we are probably not humans. The argument appeals crucially to an indifference principle whose precise content is a little unclear. I set out four possible interpretations of the principle, none of which can be used to support Bostrom’s argument. On the first two interpretations the principle is false, on the third it does not entail the conclusion, and on the fourth it only entails the conclusion given an auxiliary hypothesis that we have no reason to believe.
Orthodox Bayesian decision theory requires that an agent’s beliefs be representable by a real-valued function, ideally a probability function. Many theorists have argued this is too restrictive; it can be perfectly reasonable to have indeterminate degrees of belief. On this view, doxastic states are ideally representable by a set of probability functions. One consequence of this is that the expected value of a gamble will be imprecise. This paper looks at the attempts to extend Bayesian decision theory to deal with such cases, and concludes that all proposals advanced thus far have been incoherent. A more modest, but coherent, alternative is proposed. Keywords: Imprecise probabilities, Arrow’s theorem.
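As a minimal illustration of the imprecision the abstract mentions (my own sketch; the state names and numbers are invented, not from the paper), a doxastic state can be represented by a finite set of probability functions, and a gamble then receives an interval of expected values rather than a single one:

```python
# A credal set: several probability functions over two states of the world,
# standing in for an agent's indeterminate degrees of belief.
credal_set = [
    {"rain": 0.2, "dry": 0.8},
    {"rain": 0.5, "dry": 0.5},
    {"rain": 0.7, "dry": 0.3},
]

# A gamble, specified by its payoff in each state.
gamble = {"rain": 100.0, "dry": -40.0}

def expected_value(p, payoffs):
    """Expected value of the gamble under a single probability function."""
    return sum(p[s] * payoffs[s] for s in payoffs)

# The gamble's value is imprecise: an interval, not a point.
values = [expected_value(p, gamble) for p in credal_set]
interval = (min(values), max(values))
print(interval)
```

The decision-theoretic difficulty the paper addresses is precisely what an agent should do when, as here, such intervals for two options overlap.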
Data about attitude reports provide some of the most interesting arguments for, and against, various theses of semantic relativism. This paper is a short survey of three such arguments. First, I’ll argue (against recent work by von Fintel and Gillies) that relativists can explain the behaviour of relativistic terms in factive attitude reports. Second, I’ll argue (against Glanzberg) that looking at attitude reports suggests that relativists have a more plausible story to tell than contextualists about the division of labour between semantics and meta-semantics. Finally, I’ll offer a new argument for invariantism (i.e. against both relativism and contextualism) about moral terms. The argument will turn on the observation that the behaviour of normative terms in factive and non-factive attitude reports is quite unlike the behaviour of any other plausibly context-sensitive term. Before that, I’ll start with some taxonomy, just so it’s clear what the intended conclusions below are supposed to be.
In “Now the French are invading England” (Analysis 62, 2002, pp. 34-41), Komarine Romdenh-Romluc offers a new theory of the relationship between recorded indexicals and their content. Romdenh-Romluc proposes that Kaplan’s basic idea, that reference is determined by applying a rule to a context, is correct, but we have to be careful about what the context is, since it is not always the context of utterance. A few well known examples illustrate this. The “here” and “now” in “I am not here now” on an answering machine do not refer to the time and place of the original utterance, but to the time the message is played back, and the place its attached telephone is located. Any occurrence of “today” in a newspaper or magazine refers not to the day the story in which it appears was written, nor to the day the newspaper or magazine was printed, but to the cover date of that publication. Still, it is plausible that for each (token of an) indexical there is a salient context, and that “today” refers to the day of its context, “here” to the place of its context, and so on. Romdenh-Romluc takes this to be true, and then makes a proposal about what the salient context is. It is “the context that Ac would identify on the basis of cues that she would reasonably take U to be exploiting.” (39) Ac is the relevant audience, “the individual who it is reasonable to take the speaker to be addressing”, and who is assumed to be linguistically competent and attentive. (So Ac might not be the person U intends to address. This will not matter for what follows.) The proposal seems to suggest that it is impossible to trick a reasonably attentive hearer about what the referent of a particular indexical is. Since such trickery does seem possible, Romdenh-Romluc’s theory needs (at least) supplementation. I present two examples of such tricks.
I argue that what evidence an agent has does not supervene on how she currently is. Agents do not always have to infer what the past was like from how things currently seem; sometimes the facts about the past are retained pieces of evidence that can be the start of reasoning. The main argument is a variant on Frank Arntzenius’s Shangri La example, an example that is often used to motivate the thought that evidence does supervene on current features.
Lloyd Humberstone’s recently published Philosophical Applications of Modal Logic presents a number of new ideas in modal logic as well as explication and critique of recent work of many others. We extend some of these ideas and answer some questions that are left open in the book.
This paper is about three of the most prominent debates in modern epistemology. The conclusion is that three prima facie appealing positions in these debates cannot be held simultaneously. The first debate is scepticism vs anti-scepticism. My conclusions apply to most kinds of debates between sceptics and their opponents, but I will focus on the inductive sceptic, who claims we cannot come to know what will happen in the future by induction. This is a fairly weak kind of scepticism, and I suspect many philosophers who are generally anti-sceptical are attracted by this kind of scepticism. Still, even this kind of scepticism is quite unintuitive. I’m pretty sure I know (1) on the basis of induction. (1) It will snow in Ithaca next winter. Although I am taking a very strong version of anti-scepticism to be intuitively true here, the points I make will generalise to most other versions of scepticism. (Focussing on the inductive sceptic avoids some potential complications that I will note as they arise.) The second debate is a version of rationalism vs empiricism. The kind of rationalist I have in mind accepts that some deeply contingent propositions can be known a priori, and the empiricist I have in mind denies this. Kripke showed that there are contingent propositions that can be known a priori. One example is Water is the watery stuff of our acquaintance. (‘Watery’ is David Chalmers’s nice term for the properties of water by which folk identify it.) All the examples Kripke gave are of propositions that are, to use Gareth Evans’s term, deeply necessary (Evans, 1979). It is a matter of controversy presently just how to analyse Evans’s concepts of deep necessity and contingency, but most of the controversies are over details that are not important right here. I’ll simply adopt Stephen Yablo’s recent suggestion: a proposition is deeply contingent if it could have turned out to be true, and could have turned out to be false (Yablo, 2002). Kripke did not provide examples of any deeply contingent propositions knowable a priori, though nothing he showed rules out their existence.
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
John Burgess has recently argued that Timothy Williamson’s attempts to avoid the objection that his theory of vagueness is based on an untenable metaphysics of content are unsuccessful. Burgess’s arguments are important, and largely correct, but there is a mistake in the discussion of one of the key examples. In this note I provide some alternative examples and use them to repair the mistaken section of the argument.