This paper was written for a workshop on ethics and epistemology at Missouri. I use an example from unpublished work with Ishani Maitra to develop a new kind of argument for expressivism. (I don’t endorse the argument, but I think it is interesting.) Roughly, the argument is that knowledge is a norm governing assertions, but moral claims do not have to be known to be properly made, so to make a moral claim is not to make an assertion. Some suggestions are made for how a non-expressivist might avoid the argument.
The only part of the Patient Protection and Affordable Care Act (hereafter, ‘the ACA’) struck down in National Federation of Independent Business (NFIB) et al. v. Sebelius, Secretary of Health and Human Services, et al. was a provision expanding Medicaid. We will argue that this was a mistake; the provision should not have been struck down. We’ll do this by identifying a test that C.J. Roberts used to justify his view that this provision was unconstitutional. We’ll defend that test against some objections raised by J. Ginsburg. We’ll then go on to argue that, properly applied, that test establishes the constitutionality of the Medicaid provision.
Timothy Williamson has argued that our evidence is what we know. This implies that anything we come to know by inference instantly becomes part of our evidence, and that all of our evidence is true. I argue that neither of these implications is correct. I conclude by noting that a rival theory of evidence, one based on a suggestion Jerry Fodor makes in The Modularity of Mind, is not vulnerable to the criticisms I make of Williamson, nor to the criticisms he makes of traditional theories of evidence.
Traditional philosophy talks a lot about beliefs. Modern philosophy talks a lot about degrees of belief. Are these two concepts related? We suggest they are: X believes that p iff X's degree of belief is one. We offer a contextualist account of belief to handle the most obvious counterexamples.
It’s a good time to be doing history of late analytic philosophy. We get to be part of the birth of a new field of philosophy. Some may see this as a much needed gap in the literature. Indeed, there are a couple of reasons for scepticism about the very field, both of which are plausible but wrong. One reason is that it is too recent. But it can’t be too recent for general historical study; there are courses in history departments on September 11, so it’s not like looking at philosophy from thirty to forty years ago is rushing in where historians fear to tread. And indeed, if logical positivism could be treated historically in the 1960s, and ordinary language philosophy could be treated historically at the turn of the century, it seems a reasonable time to look back at the important works of the 1970s that established the contemporary era in philosophy.
Ernest Adams has claimed that a probabilistic account of validity gives the best account of our intuitive judgements about the validity of arguments. In particular, he claims, it has the best hope of accounting for our judgements about many arguments involving conditionals. Most of the examples in the literature on this topic have been arguments framed in the language of propositional logic. I show that once we consider arguments involving predicates and involving identity, Adams’s strategy is less successful.
John Leslie's Doomsday argument uses the frequency interpretation of probability to argue that the end of the universe is closer than we might have thought. Oh well - all the worse for the frequency interpretation.
In recent years a number of authors have defended the interest-relativity of knowledge and justification. Views of this form are floated by John Hawthorne (2004), and endorsed by Jeremy Fantl and Matthew McGrath (2002; 2009), Jason Stanley (2005) and Brian Weatherson (2005). The various authors differ quite a lot in how much interest-relativity they allow, but what is common is the defence of interest-relativity. These views have, quite naturally, drawn a range of criticisms. The primary purpose of this paper is to respond to these criticisms and, as it says on the tin, defend interest-relative invariantism, or IRI for short. But I don’t plan to defend every possible version of IRI, only a particular one. Most of the critics of IRI have assumed that it must have some or all of the following features.
Suppose a rational agent S has some evidence E that bears on p, and on that basis makes a judgment about p. For simplicity, we’ll normally assume that she judges that p, though we’re also interested in cases where the agent makes other judgments, such as that p is probable, or that p is well-supported by the evidence. We’ll also assume, again for simplicity, that the agent knows that E is the basis for her judgment. Finally, we’ll assume that the judgment is a rational one to make, though we won’t assume the agent knows this. Indeed, whether the agent can always know that she’s making a rational judgment when in fact she is will be of central importance in some of the debates that follow.
This paper has three aims. First, I’ll argue that there’s no good reason to accept any kind of ‘easy knowledge’ objection to externalist foundationalism. It might be a little surprising that we can come to know that our perception is accurate by using our perception, but any attempt to argue this is impossible seems to rest on either false premises or fallacious reasoning. Second, there is something defective about using our perception to test whether our perception is working. What this reveals is that there are things we aim for in testing other than knowing that the device being tested is working. I’ll suggest that testing aims for sensitive knowledge that the device is working. Testing a device, such as our perceptual system, by using its own outputs may deliver knowledge, but it can’t deliver sensitive knowledge. So it’s a bad way to test the system. The big conclusion here is that sensitivity is an important epistemic virtue, although it is not necessary for knowledge. Third, I’ll argue that the idea that sensitivity is an epistemic virtue can provide a solution to a tricky puzzle about inductive evidence. This provides another reason for thinking that the conclusion of section two is correct: not all epistemic virtues are to do with knowledge.
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of “essentially Kolmogorovian” probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism – in which the probability functions are taken relative to intuitionistic logic – rather than by adopting a radically non-Kolmogorovian, e.g. non-additive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response amongst those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behaviour when due consideration is paid to the fact that bets are settled only when/if the outcome betted on becomes known.
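One way to display the kind of generalization the abstract describes (my reconstruction; the paper's own statement may differ in detail) is to state the axioms relative to the consequence relation ⊢ of the chosen logic, rather than building classical logic in from the start:

```latex
% Probability axioms relative to a consequence relation $\vdash$:
\begin{align*}
&\text{(P1)}\quad 0 \le P(A) \le 1\\
&\text{(P2)}\quad \text{if } \vdash A \text{ then } P(A) = 1;\ \text{if } A \vdash\ \text{then } P(A) = 0\\
&\text{(P3)}\quad \text{if } A \vdash B \text{ then } P(A) \le P(B)\\
&\text{(P4)}\quad P(A) + P(B) = P(A \wedge B) + P(A \vee B)
\end{align*}
```

With ⊢ read classically these conditions pick out the standard Kolmogorov probability functions; with ⊢ read intuitionistically they pick out a wider class, since, for instance, P(A ∨ ¬A) = 1 is no longer forced.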
You’re probably familiar with the following dialectic. We want there to be some systematic connection between credences and beliefs. At first blush, saying that a person believes p and has a very low credence in p isn’t just an accusation of irrationality, it is literally incoherent. The simplest such connection would be a reduction of beliefs to credences. But the simplest reductions don’t work. If we identify beliefs with credence 1, and take credences to support betting dispositions, then a rational agent will have very few beliefs. There are lots of things that an agent, we would normally say, believes even though she wouldn’t bet on them at absurd odds. Note that this argument doesn’t rely on reducing credences to betting dispositions; as long as credences support the betting dispositions, the argument goes through.
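The expected-value point behind this dialectic can be made concrete with a toy calculation (my illustration, with made-up stakes, not an example from the paper): identifying belief with credence 1 commits the agent to accepting bets at arbitrarily bad odds.

```python
# Toy illustration: if credences support betting dispositions, an agent
# with credence exactly 1 in p should accept a bet on p at ANY odds,
# since the bet's expected value stays positive no matter the downside.

def bet_ev(credence, win, lose):
    """Expected value of a bet that pays `win` if p, costs `lose` if not-p."""
    return credence * win - (1 - credence) * lose

# With credence exactly 1, risking $1,000,000 to win $1 still has EV +$1:
assert bet_ev(1.0, win=1, lose=1_000_000) == 1.0
# With credence 0.999 -- still very confident -- the same bet is awful:
assert bet_ev(0.999, win=1, lose=1_000_000) < 0
print("a credence-1 agent bets at arbitrarily bad odds")
```

Since ordinary believers plainly decline such bets, the identification of belief with credence 1 (plus the betting link) leaves rational agents with almost no beliefs, which is the problem the abstract describes.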
I argue that ordinary objects are fusions of past and present, but not future, temporal parts. This theory provides the neatest solution to some puzzles concerning intrinsic properties, and is supported by some surprising linguistic data. (This paper is probably inconsistent with some other papers I've written, but the line it runs is at least amusing and original.)
As with many aspects of David Lewis’s work, it is hard to provide a better summary of his views than he provided himself. So the following introduction to what the Humean Supervenience view is will follow the opening pages of Lewis (1994a) extremely closely. But for those readers who haven’t read that paper, here’s the nickel version.
An important tradition in metaphysics takes its job to be finding a limited number of ingredients with which we can tell the complete story of the world (or some subject matter). Physicalism, for example, claims that the list of ingredients sufficient to tell the complete story about the very small, or about the non-sentient, is sufficient to tell the complete story about all of the world. Some people take the moral of this kind of metaphysics to be eliminativist; that we can tell the complete story of the world without meanings, or inflations, shows that meaning and inflation do not exist. Most people are not so blasé about rejecting commonsense opinions. Inflations, wars, rivers and beliefs all exist, but there is nothing but atoms in the void, so we must find a way of showing that the arrangement of atoms in the void makes true the stories about inflations and so on.
Three recent books have argued that Keynes’s philosophy, like Wittgenstein’s, underwent a radical foundational shift. It is argued that Keynes, like Wittgenstein, moved from an atomic Cartesian individualism to a more conventionalist, intersubjective philosophy. It is sometimes argued this was caused by Wittgenstein’s concurrent conversion. Further, it is argued that recognising this shift is important for understanding Keynes’s later economics. In this paper I argue that the evidence adduced for these theses is insubstantial, and other available evidence contradicts their claims.
I’m not sure how much knowledge everyone already has, so I’d like to start with a little questionnaire. On a card, say for each of the following topics whether you’re familiar with the topic, have heard of it but aren’t familiar with it, or have never heard of it.
It is sometimes claimed (e.g., by Sider (2001a,b); Holton (2003); Stalnaker (2004); Williams (2007); Weatherson (2003, 2010)) that a theory of predicate meaning that assigns a central role to naturalness is either (a) Lewisian, (b) true, or (c) both. The theory in question is rarely developed in particularly great detail, but the rough intuitive idea is that the meaning of a predicate is the most natural property that is more-or-less consistent with the usage of the predicate. The point of this note is to investigate whether a version of this idea could be true, and whether it could be properly attributed to Lewis. I’m going to mostly focus on the second question, but I think in such a way that light is shed on the first question. To anticipate the answer a little, I’m going to say that whether the use plus naturalness theory is plausibly attributed to Lewis (and is plausibly true) depends on what we want a theory of (predicate) meaning to do. Here are two very distinct tasks we could be engaged in. First, we could be investigating the metaphysics of meaning, and so be interested in how it is that a pattern of animal noises can come to have any kind of content at all. Second, we could be investigating the meaning of some particular term, where substantive claims about the meanings of other terms are presupposed in our inquiry. Call the first project metasemantics, and the second project applied semantics. I’m going to conclude that use plus naturalness is a plausible way to approach applied semantics. But it isn’t a great way to approach metasemantics. The problem is that once we crunch through the details, it’s impossible to disentangle a notion of “use” such that naturalness can be added to it to get a theory of meaning. Before we can get very far on any of these inquiries, we need to say a bit about what we mean by ‘naturalness’. Naturalness plays a lot of distinctive roles for Lewis. Some of these are broadly metaphysical roles.
These roles are the primary focus of Lewis (1983a).
Sameness and Substance Renewed (hereafter, 2001) is, in effect, a second edition of Wiggins’s 1980 book Sameness and Substance (hereafter, 1980), which in turn expanded and corrected some ideas in his 1967 Identity and Spatio-Temporal Continuity (hereafter, 1967). All three books have similar aims. The first is to argue, primarily against Geach, that identity is absolute not relative. The second is to argue that, despite this, whenever an identity claim a = b is true, there is a sortal f such that a is the same f as b. The biggest difference between 1967 and the two later books is that the later books contain much more detail on what a sortal must be if this claim, called D, is to be both correct and philosophically interesting. The third aim is to apply the first two conclusions to the topic of personal identity.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
The generality problem is a well-known problem for process reliabilist theories of justification. Here’s how the problem usually gets started. In the first instance, token processes of belief formation are not themselves reliable or unreliable. Rather, it is types of processes of belief formation that are reliable or unreliable. But any token process is an instance of many different types. And these types may differ in reliability.
Patrick Greenough has argued that a predicate is vague iff it is epistemically tolerant. I show that there are some counterexamples to this analysis, and that it rests on some fairly contentious theories about the behaviour of vague terms in propositional attitude reports.
One of the benefits of the 2D framework we looked at last week was that it explained how we could understand a sentence without knowing which proposition it expressed. And we could do this even if we give an account of understanding which is closely tied to the possible worlds semantics we use to analyse propositions. Really this can be done very easily, without appeal to any high-flying Kripkean cases. In “Analytic Metaphysics” Jackson discusses a very simple case of it. I can understand an utterance of “I have a beard” without knowing which proposition it expresses. I know how the proposition is generated from context plus meaning: if X is the speaker then the sentence expresses the proposition that X has a beard. And that is enough for understanding. But if I don’t know who said the sentence, so I don’t know who X is, I don’t know which proposition is expressed by that utterance.
In philosophy it’s hard to find a view that hasn’t had an ism associated with it, but there are some. Some theories are too obscure or too fantastic to be named. And occasionally a theory is too deeply entrenched to even be conceptualised as a theory. For example, many of us hold without thinking about it the theory that “the central function of language is to enable a speaker to reveal his or her thoughts to a hearer,” (3) that in the case of declarative utterances the thoughts in question are beliefs whose content is some proposition or other, and that hearers figure out what the content of that belief is by virtue of an inference that turns on their beliefs about the meanings of the words we use. These claims might seem too trivial to even be called a theory. They have seemed too trivial to draw an ism. Christopher Gauker calls them ‘the received view’, and the purpose of his book Words without Meaning (all page references to this book) is to argue against this received view and propose an alternative theory in its place. In Gauker’s theory the primary function of language is social coordination. If language ever functions as a conduit to the mind, this is a secondary effect.
In “A Reliabilist Solution to the Problem of Promiscuous Bootstrapping”, Hilary Kornblith (2009) proposes a reliabilist solution to the bootstrapping problem. I’m going to argue that Kornblith’s proposal, far from solving the bootstrapping problem, in fact makes the problem much harder for the reliabilist to solve. Indeed, I’m going to argue that Kornblith’s considerations give us a way to develop a quick reductio of a certain kind of reliabilism. Let’s start with a crude statement of the problem. The bootstrapper, call them S, looks at a device D1 that happens to be reliable, though at this stage S doesn’t know this. We assume that S is a reliable reader of devices. S then draws the following conclusions.
This collection arose out of a conference on intuitions at the University of Notre Dame in April 1996. The papers in it mainly address two related questions: (a) How much evidential weight should be assigned to intuitions? and (b) Are concepts governed by necessary and sufficient conditions, or are they governed by ‘family resemblance’ conditions, as Wittgenstein suggested? The book includes four papers by psychologists relating and analyzing some empirical findings concerning intuitions and eleven papers by philosophers endorsing various answers to these questions.
I argue with my friends a lot. That is, I offer them reasons to believe all sorts of philosophical conclusions. Sadly, despite the quality of my arguments, and despite their apparent intelligence, they don’t always agree. They keep insisting on principles in the face of my wittier and wittier counterexamples, and they keep offering their own dull alleged counterexamples to my clever principles. What is a philosopher to do in these circumstances? (And I don’t mean get better friends.) One popular answer these days is that I should, to some extent, defer to my friends. If I look at a batch of reasons and conclude p, and my equally talented friend reaches an incompatible conclusion q, I should revise my opinion so I’m now undecided between p and q. I should, in the preferred lingo, assign equal weight to my view as to theirs. This is despite the fact that I’ve looked at their reasons for concluding q and found them wanting. If I hadn’t, I would have already concluded q. The mere fact that a friend (from now on I’ll leave off the qualifier ‘equally talented and informed’, since all my friends satisfy that) reaches a contrary opinion should be reason to move me. Such a position is defended by Richard Feldman (2006a, 2006b), David Christensen (2007) and Adam Elga (forthcoming). This equal weight view, hereafter EW, is itself a philosophical position. And while some of my friends believe it, some of my friends do not. (Nor, I should add for your benefit, do I.) This raises an odd little dilemma. If EW is correct, then the fact that my friends disagree about it means that I shouldn’t be particularly confident that it is true, since EW says that I shouldn’t be too confident about any position on which my friends disagree. But, as I’ll argue below, to consistently implement EW, I have to be maximally confident that it is true. So to accept EW, I have to inconsistently both be very confident that it is true and not very confident that it is true.
This seems like a problem, and a reason to not accept EW.
Many epistemologists hold that an agent can come to justifiably believe that p is true by seeing that it appears that p is true, without having any antecedent reason to believe that visual impressions are generally reliable. Certain reliabilists think this, at least if the agent’s vision is generally reliable. And it is a central tenet of dogmatism (as described by Pryor (2000) and Pryor (2004)) that this is possible. Against these positions it has been argued (e.g. by Cohen (2005) and White (2006)) that this violates some principles from probabilistic learning theory. To see the problem, let’s note what the dogmatist thinks we can learn by paying attention to how things appear. (The reliabilist says the same things, but we’ll focus on the dogmatist.) Suppose an agent receives an appearance that p, and comes to believe that p. Letting Ap be the proposition that it appears to the agent that p, and → be the material implication, we can say that the agent learns that p, and hence is in a position to infer Ap → p, once they receive the evidence Ap. This is surprising, because we can prove the following.
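The provable fact in question can be reconstructed as follows (my sketch of the standard calculation behind this style of objection; abbreviate Pr(Ap ∧ p) = a, Pr(Ap ∧ ¬p) = b, Pr(¬Ap) = c, with a + b + c = 1 and Pr(Ap) > 0):

```latex
\begin{align*}
\Pr(Ap \to p \mid Ap) &= \Pr(p \mid Ap) = \frac{a}{a+b}\\
\Pr(Ap \to p) &= \Pr(\neg Ap) + \Pr(Ap \wedge p) = c + a\\
(c+a)(a+b) - a &= (c+a)(1-c) - a = bc \ge 0\\
\therefore\ \Pr(Ap \to p \mid Ap) &\le \Pr(Ap \to p)
\end{align*}
```

So conditionalizing on Ap can never raise, and when b and c are both positive strictly lowers, the agent's credence in Ap → p, which is what makes the dogmatist's claimed learning look probabilistically impossible.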
Peter Walley argues that a vague credal state need not be representable by a set of probability functions that could represent precise credal states, because he believes that the members of the representor set need not be countably additive. I argue that the states he defends are in a way incoherent.
Orthodox Bayesian decision theory requires that an agent’s beliefs be representable by a real-valued function, ideally a probability function. Many theorists have argued this is too restrictive; it can be perfectly reasonable to have indeterminate degrees of belief. So doxastic states are ideally representable by a set of probability functions. One consequence of this is that the expected value of a gamble will be imprecise. This paper looks at the attempts to extend Bayesian decision theory to deal with such cases, and concludes that all proposals advanced thus far have been incoherent. A more modest, but coherent, alternative is proposed. Keywords: Imprecise probabilities, Arrow’s theorem.
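The way imprecision enters can be sketched concretely (my formulation with invented numbers, not the paper's own proposal): represent the indeterminate state as a set of probability functions, and a gamble's expected value comes out as an interval rather than a number.

```python
# Minimal sketch: an indeterminate doxastic state modelled as a set of
# probability functions (a "representor"); expected value of a gamble is
# then determinate only up to an interval.

def expected_value(prob, payoffs):
    """EV of a gamble under a single probability function over outcomes."""
    return sum(prob[o] * payoffs[o] for o in payoffs)

def ev_interval(representor, payoffs):
    """Over a set of probability functions, EV spans [min EV, max EV]."""
    evs = [expected_value(p, payoffs) for p in representor]
    return (min(evs), max(evs))

# Hypothetical agent: credence in rain indeterminate between 0.25 and 0.75.
representor = [
    {"rain": 0.25, "dry": 0.75},
    {"rain": 0.5, "dry": 0.5},
    {"rain": 0.75, "dry": 0.25},
]
bet = {"rain": 10, "dry": -5}  # wins $10 if rain, loses $5 if dry
print(ev_interval(representor, bet))  # (-1.25, 6.25)
```

The hard part, which the sketch does not touch, is the decision rule: what to do when, as here, the interval straddles zero. The paper's claim is that the extensions proposed so far for exactly this situation are incoherent.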
This paper started life as a short note I wrote around New Year 2007 while in Minneapolis. It was originally intended as a blog post. That might explain, if not altogether excuse, the flippant tone in places. But it got a little long for a post, so I made it into the format of a paper and posted it to my website. The paper has received a lot of attention, so it seems like it will be helpful to see it in print. Since a number of people have responded to the argument as stated, I’ve decided to just reprint the article warts and all, and make a few comments at the end about how I see its argument in the context of the subsequent debate.
• Perceptual Evidence is Psychological: My perceptual evidence consists in facts about the psychological states I am in when undergoing a perceptual experience. (If you don’t think that evidence is propositional, the evidence might be the states themselves; I’m going to presuppose evidence is propositional, and factive, for this talk.) So, for instance, my perceptual evidence might include that I’m visually representing that there is a table in front of me.
This paper is part of a larger campaign against moderation in foundational epistemology. I think the only plausible responses to a kind of Humean sceptic are radical responses. The Humean sceptic I have in mind tells us about a sceptical scenario, ss, where our evidence is just as it actually is, but some purported piece of knowledge of ours is false. The sceptic names the proposition You aren’t in ss as s, and calls on us to respond to the following argument.
In earlier work I argued that using ‘vague probabilities’ did not ground any argument for significantly adjusting Bayesian decision theory. In this note I show that my earlier arguments don’t carry across smoothly to game theory. Allowing agents to have vague probabilities over possible outcomes dramatically increases the range of possible Nash equilibria in certain games, and hence arguably (but only arguably) increases the range of possible rational action.
Here’s a fairly quick argument that there is contingent a priori knowledge. Assume there are some ampliative inference rules. Since the alternative appears to be inductive scepticism, this seems like a safe enough assumption. Such a rule will, since it is ampliative, license some particular inference, from A infer B, where A does not entail B. That’s just what it is for the rule to be ampliative. Now run that rule inside suppositional reasoning. In particular, first assume A, then via this rule infer B. Now do a step of →-introduction, inferring A → B and discharging the assumption A. Since A does not entail B, this will be contingent, and since it rests on a sound inference with no (undischarged) assumptions, it is a priori knowledge. This argument is hardly new. It is part of the argument in some recent papers promoting contingent a priori knowledge, such as Hawthorne (2002) and Weatherson (2005). But it is an intriguingly quick argument for a stunning philosophical conclusion, one that seems to rely on few dubious steps. I’m going to argue that it fails for a quite interesting reason. At least in natural deduction systems, some inferential rules (such as ∀-introduction) have restrictions on when they can be applied. I’m going to argue that ampliative reasoning rules cannot, in general, be applied inside the scope of suppositions, and that is why the above argument fails. I’ll argue for this conclusion by showing that a very weak ampliative rule leads, when combined with some other plausible principles, to absurd conclusions if it is applied inside the scope of suppositions. If even a weak ampliative rule cannot be used suppositionally, then it plausibly follows that no ampliative rule can be used suppositionally. The construction I’m going to use to show this is quite similar to one used by Sinan Dogramaci in his (forthcoming), though as we’ll see at the end Dogramaci and I have different views about what to take away from these arguments.
Some people might think we have already seen an argument that ampliative inference rules fail in suppositional reasoning.
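The quick argument for the contingent a priori can be laid out as a three-line natural-deduction sketch (my rendering of the steps the abstract describes):

```latex
\begin{align*}
1.&\ A && \text{(assumption)}\\
2.&\ B && \text{(from 1, by the ampliative rule)}\\
3.&\ A \to B && \text{($\to$-introduction, discharging 1)}
\end{align*}
```

The diagnosis on offer is that step 2 is the illegitimate one: like ∀-introduction, ampliative rules carry restrictions on where they may be applied, and the scope of a supposition is outside those restrictions.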
Dean Pettit recently argued in Mind that understanding a word did not require knowing what it meant. Adam and I show that his core arguments, which mostly turn on showing that some particular cases are cases of understanding without knowledge, do not work.
When you pick up a volume like this one, which describes itself as being about ‘knowledge ascriptions’, you probably expect to find it full of papers on epistemology, broadly construed. And you’d probably expect many of those papers to concern themselves with cases where the interests of various parties (ascribers, subjects of the ascriptions, etc.) change radically, and this affects the truth values of various ascriptions. And, at least in this paper, your expectations will be clearly met. But here’s an interesting contrast. If you’d picked up a volume of papers on ‘belief ascriptions’, you’d expect to find a radically different menu of writers and subjects. You’d expect to find a lot of concern about names and demonstratives, and about how they can be used by people not entirely certain about their denotation. More generally, you’d expect to find less epistemology, and much more mind and language. I haven’t read all the companion papers to mine in this volume, but I bet you won’t find much of that here. This is perhaps unfortunate, since belief ascriptions and knowledge ascriptions raise at least some similar issues. Consider a kind of contextualism about belief ascriptions, which holds that (L) can be truly uttered in some contexts, but not in others, depending on just what aspects of Lois Lane’s psychology are relevant in the conversation. (L) Lois Lane believes that Clark Kent is vulnerable to kryptonite. We could imagine a theorist who says that whether (L) can be uttered truly depends on whether it matters to the conversation that Lois Lane might not recognise Clark Kent when he’s wearing his Superman uniform. And, this theorist might continue, this isn’t because ‘Clark Kent’ is a context-sensitive expression; it is rather because ‘believes’ is context-sensitive. Such a theorist will also, presumably, say that whether (K) can be uttered truly is context-sensitive. (K) Lois Lane knows that Clark Kent is vulnerable to kryptonite.
And so, our theorist is a kind of contextualist about knowledge ascriptions.
Lewis Carroll’s 1895 paper “Achilles and the Tortoise” showed that we need a distinction between rules of inference and premises. We cannot, on pain of regress, treat all rules simply as further premises in an argument. But Carroll’s paper doesn’t say very much about what rules there must be. Indeed, it is consistent with what Carroll says there to think that the only rule is →-elimination. You might think that modern Bayesians, who seem to think that the only rule of inference they need is conditionalisation, have taken just this lesson from Carroll. But obviously nothing in Carroll’s argument rules out there being other rules as well.
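Carroll's regress can be displayed schematically (my reconstruction of the familiar point): if the rule licensing the step from A and A → B to B must itself be written in as a premise, a new rule-as-premise is needed at every stage.

```latex
\begin{align*}
&A,\ A \to B\ \therefore\ B && \text{needs modus ponens}\\
&A,\ A \to B,\ (A \wedge (A \to B)) \to B\ \therefore\ B && \text{needs modus ponens again}\\
&A,\ A \to B,\ (A \wedge (A \to B)) \to B,\ \ldots\ \therefore\ B && \text{and so on, without end}
\end{align*}
```

At no finite stage does the conclusion follow from the premises alone, which is why at least one rule must be kept as a rule rather than a premise.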
Barrett and Arntzenius posed a problem concerning infinite sequences of decisions. It appeared that the strategy of making the rational choice at each stage of the game was, in some circumstances, guaranteed to lead to lower returns than the strategy of making the irrational choice at each stage. This paper shows that there is only the appearance of paradox. The choices that Barrett and Arntzenius were calling ‘rational’ cannot be economically justified, and so it is not surprising that someone who makes them ends up with sub-optimal returns. A solution to the more general problem they pose is also advanced.
Assume, for fun, that temporal parts theory is true, and that some kind of modal realism (perhaps based on ersatz worlds) is true. Within this grand metaphysical picture, what are the ordinary objects? Do they have many temporal parts, or just one? Do they have many modal parts, or just one? I survey the issues involved in answering this question, including the problem of temporary intrinsics, the problem of the many, Kripke's objections to counterpart theory and quantifier domain restrictions.
F-relevant respects are never precisely defined, but the intuitive idea is clear enough. Smart-relevant respects are mental abilities, Philosopher-relevant respects presumably include where one is employed, what kinds of things one writes, etc., and, most importantly for this paper, the only Tall-relevant respect is height.
There are two points left over from last week’s seminar still to discuss. The first is whether, as Lewis claims, we are justified in positing an asymmetry in the role of pragmatics. The second is whether this approach is at all justified. We’ll look at those before going on to the material scheduled for this week.
Our primary interest this week will be in two objections Jackson mentions which seem to threaten his program. Each of them is avoided by appeal to the two-dimensional framework we sketched last week. Before we go over that framework again, we will start by looking at the objections. For reasons that may become apparent shortly, we will look at them in reverse order. So first we’ll look at the objection from Chapter 3, an objection which turns on the discovery of a posteriori necessities by Kripke and Putnam.
There is a lot that we don’t know. That means that there are a lot of possibilities that are, epistemically speaking, open. For instance, we don’t know whether it rained in Seattle yesterday. So, for us at least, there is an epistemic possibility where it rained in Seattle yesterday, and one where it did not. It’s tempting to give a very simple analysis of epistemic possibility: • A possibility is an epistemic possibility if we do not know that it does not obtain. But this is problematic for a few reasons. One issue, one that we’ll come back to, concerns the first two words. The analysis appears to quantify over possibilities. But what are they? As we said, that will become a large issue pretty soon, so let’s set it aside for now. A more immediate problem is that it isn’t clear what it is to have de re attitudes towards possibilities, such that we know a particular possibility does or doesn’t obtain. Let’s try rephrasing our analysis so that it avoids this complication.
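The bulleted analysis can be stated a little more explicitly; the notation here is my own gloss, not the paper’s:

```latex
% w is an epistemic possibility for a group G iff
% G does not know that w fails to obtain.
\[
  \mathrm{EP}_G(w) \;\iff\; \neg K_G\bigl(\neg\,\mathrm{Obtains}(w)\bigr)
\]
```

The de re worry raised above is precisely about what it takes for the knowledge operator \(K_G\) to attach to a particular possibility \(w\).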
Recently, Timothy Williamson has argued that considerations about margins of error can generate a new class of cases where agents have justified true beliefs without knowledge. I think this is a great argument, and it has a number of interesting philosophical conclusions. In this note I’m going to go over the assumptions of Williamson’s argument. I’m going to argue that the assumptions which generate the justification without knowledge are true. I’m then going to go over some of the recent arguments in epistemology that are refuted by Williamson’s work. And I’m going to end with an admittedly inconclusive discussion of what we can know when using an imperfect measuring device.
In two excellent recent papers, Jacob Ross has argued that the standard arguments for the ‘thirder’ answer to the Sleeping Beauty puzzle lead to violations of countable additivity. The problem is that most arguments for that answer generalise in awkward ways when applied to the whole class of what Ross calls Sleeping Beauty problems. In this note I develop a new argument for the thirder answer that doesn't generalise in this way.
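For reference, countable additivity (the constraint that, on Ross’s argument, the standard thirder arguments violate) is the usual requirement that for pairwise disjoint events \(A_1, A_2, \ldots\):

```latex
\[
  P\Bigl(\,\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} P(A_i)
\]
```

Merely finite additivity imposes this only for finite unions, which is why the infinite class of problems is where the trouble arises.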
I defend normative externalism from the objection that it cannot account for the wrongfulness of moral recklessness. The defence is fairly simple: there is no wrong of moral recklessness. There is an intuitive argument by analogy that there should be a wrong of moral recklessness, and the bulk of the paper consists of a response to this analogy. A central part of my response is that if people were motivated to avoid moral recklessness, they would have to have an unpleasant sort of motivation, what Michael Smith calls “moral fetishism”.