I argue that believing that p implies having a credence of 1 in p. The argument: the belief that p involves representing p as being the case; representing p as being the case involves not allowing for the possibility of not-p; and having a credence greater than 0 in not-p involves regarding not-p as a possibility.
In this paper I argue for a doctrine I call ‘infallibilism’, which I stipulate to mean that if S knows that p, then the epistemic probability of p for S is 1. Some fallibilists will claim that this doctrine should be rejected because it leads to scepticism. Though it's not obvious that infallibilism does lead to scepticism, I argue that we should be willing to accept it even if it does. Infallibilism should be preferred because it has greater explanatory power than fallibilism. In particular, I argue that an infallibilist can easily explain why assertions of ‘p, but possibly not-p’ (where the ‘possibly’ is read as referring to epistemic possibility) are infelicitous in terms of the knowledge rule of assertion, while a fallibilist cannot. Furthermore, an infallibilist can explain the infelicity of utterances of ‘p, but I don't know that p’ and ‘p might be true, but I'm not willing to say that for all I know, p is true’, and why, when a speaker thinks p is epistemically possible for her, she will agree (if asked) that for all she knows, p is true. The simplest explanation of these facts entails infallibilism. Fallibilists have tried and failed to explain the infelicity of ‘p, but I don't know that p’, and have not even attempted to explain the last two facts. I close by considering two facts that seem to pose a problem for infallibilism, and argue that they don't.
Concessive knowledge attributions (CKAs) are knowledge attributions of the form ‘S knows p, but it’s possible that q’, where q obviously entails not-p (Rysiew, Noûs 35:477–514, 2001). The significance of CKAs has been widely discussed recently. It’s agreed by all that CKAs are infelicitous, at least typically. But the agreement ends there. Different writers have invoked them in their defenses of all sorts of philosophical theses; to name just a few: contextualism, invariantism, fallibilism, infallibilism, and the falsity of the knowledge rules for assertion and practical reasoning. In fact, there is a lot of confusion about CKAs and their significance. I try to clear some of this confusion up, as well as show what their significance is with respect to the debate between fallibilists and infallibilists about knowledge in particular.
Timothy Williamson's epistemology leads to a fairly radical version of scepticism. According to him, all knowledge is evidence. It follows that if S knows p, the evidential probability for S that p is 1. I explain Williamson's infallibilist account of perceptual knowledge, contrasting it with Peter Klein's, and argue that Klein's account leads to a certain problem which Williamson's can avoid. Williamson can allow that perceptual knowledge is possible and that all knowledge is evidence, while at the same time avoiding Klein's problem. But while Williamson can allow that we know some things through experience, there are very many things he must say we cannot know. Given just how very many these are, he should be considered a sceptic.
We distinguish, among other things, between the agent of the context, the speaker of the agent's utterance, the mechanism the agent uses to produce her utterance, and the tokening of the sentence uttered. Armed with these distinctions, we tackle the ‘answer-machine’, ‘post-it note’ and other allegedly problematic cases, arguing that they can be handled without departing significantly from Kaplan's semantical framework for indexicals. In particular, we argue that these cases don't require adopting Stefano Predelli's intentionalism.
Cartesian skepticism about epistemic justification (‘skepticism’) is the view that many of our beliefs about the external world – e.g., my current belief that I have hands – aren’t justified. I examine the two most influential arguments for skepticism – the Closure Argument and the Underdetermination Argument – from an evidentialist perspective. For both arguments it is clear which premise the anti-skeptic must deny. The Closure Argument, I argue, is the better argument in that its key premise is weaker than the Underdetermination Argument’s key premise. However, it’s also likely that the motivation for accepting both key premises is exactly the same. So there may be a sense in which both arguments provide exactly the same motivation for skepticism. I then argue that if I’m right about what the motivation for accepting the arguments’ key premises is, then neither argument succeeds in providing a good reason to accept skepticism. I conclude by explaining why I think epistemologists are right to expend a lot of time and effort on refuting these arguments, even if neither argument provides any motivation for skepticism.
If one flips an unbiased coin a million times, there are 2^1,000,000 possible heads/tails sequences, any one of which might be the sequence that obtains, and each of which is equally likely to obtain. So it seems (1) 'If I had tossed a fair coin a million times, it might have landed heads every time' is true. But as several authors have pointed out, (2) 'If I had tossed a fair coin a million times, it wouldn't have come up heads every time' will be counted as true in everyday contexts. And according to David Lewis' influential semantics for counterfactuals, (1) and (2) are contradictories. We have a puzzle. We must either (A) deny that (2) is true, (B) deny that (1) is true, or (C) deny that (1) and (2) are contradictories, thus rejecting Lewis' semantics. In this paper I discuss and criticize the proposals of David Lewis and, more recently, J. Robert G. Williams, which solve the puzzle by taking option (B). I argue that we should opt for either (A) or (C).
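The counting behind the puzzle can be made concrete. As a back-of-the-envelope illustration (not part of the paper), the following sketch computes the exact probability that every one of n fair tosses lands heads, treating each of the 2^n sequences as equally likely:

```python
from fractions import Fraction

def p_all_heads(n: int) -> Fraction:
    """Exact probability that n tosses of a fair coin all land heads."""
    # Each of the 2**n equally likely head/tail sequences has probability
    # 1/2**n, and exactly one of them is the all-heads sequence.
    return Fraction(1, 2**n)

# Small cases are tiny but plainly nonzero:
print(p_all_heads(10))   # 1/1024
# For a million tosses the all-heads sequence remains one of the equally
# likely possibilities; its probability is 1/2**1000000 -- astronomically
# small, yet still greater than zero, which is what makes (1) tempting.
```

The point of the computation is just that nothing in the probability calculus rules the all-heads sequence out; the tension with (2) is semantic, not mathematical.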
According to the Imprecise Credence Framework (ICF), a rational believer's doxastic state should be modelled by a set of probability functions rather than a single probability function, namely, the set of probability functions allowed by the evidence (Joyce). Roger White has recently given an arresting argument against the ICF, which has garnered a number of responses. In this article, I attempt to cast doubt on his argument. First, I point out that it's not an argument against the ICF per se, but an argument for the Principle of Indifference. Second, I present an argument that's analogous to White's. I argue that if White's premises are true, the premises of this argument are too. But the premises of my argument entail something obviously false. Therefore, White's premises must not all be true.
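The contrast between a sharp credence and an imprecise credal state can be sketched in a few lines. This is a toy illustration of my own, not anything from the article: it represents a credal state for a single proposition H by the set of values that the evidence-admissible probability functions assign to H (the function name `credal_interval` and the coin-bias setup are illustrative assumptions):

```python
# Toy sketch: an imprecise credal state for one proposition H, modelled as
# the set of values the admissible probability functions assign to H.

def credal_interval(candidates, proposition):
    """Summarize a credal set by the spread of values it assigns to a proposition."""
    values = [p[proposition] for p in candidates]
    return min(values), max(values)

# Suppose the evidence fixes nothing about a coin's bias: every bias in
# {0.1, 0.2, ..., 0.9} yields an admissible probability function for
# H = 'the next toss lands heads'.
credal_set = [{"H": round(b / 10, 1)} for b in range(1, 10)]
print(credal_interval(credal_set, "H"))    # (0.1, 0.9)

# A sharp believer -- e.g., one who obeys the Principle of Indifference and
# settles on 0.5 -- is the special case of a one-element credal set:
print(credal_interval([{"H": 0.5}], "H"))  # (0.5, 0.5)
```

The ICF says the first, interval-valued state can be the rational response to the evidence; the Principle of Indifference, which White's argument is said to support, forces the second.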
Several philosophers have claimed that S knows p only if S's belief is safe, where S's belief is safe iff (roughly) in nearby possible worlds in which S believes p, p is true. One widely held intuition is that one cannot know that one's lottery ticket will lose a fair lottery prior to an announcement of the winner, regardless of how probable it is that it will lose. Duncan Pritchard has claimed that a chief advantage of safety theory is that it can explain the lottery intuition without succumbing to skepticism. I argue that Pritchard is wrong. If a version of safety theory can explain the lottery intuition, it will also lead to skepticism. (Dylan Dodd, Erkenntnis, pp. 1–26, DOI 10.1007/s10670-011-9305-z.)
How can experience provide knowledge, or even justified belief, about the objective world outside our minds? This volume presents original essays by prominent contemporary epistemologists, who show how philosophical progress on foundational issues can improve our understanding of, and suggest a solution to, this famous sceptical question.
According to the traditional view of weakness of will, a weak-willed agent acts in a way inconsistent with what she judges to be best. Richard Holton has argued against this view, claiming that ‘the central cases of weakness of will are best characterized not as cases in which people act against their better judgment, but as cases in which they fail to act on their intentions’ (1999: 241). But Holton doesn’t think all failures to act on one’s prior intentions, or all revisings of intentions, are cases of weakness of will (WW). Rather, he thinks an intention-revision is a case of WW only when it occurs ‘in circumstances in which [one] should not have revised [the intention]’. Holton points out that according to the traditional view of WW, to call an agent ‘weak-willed’ is to make a descriptive claim about the agent (about whether an action in fact is inconsistent with what (s)he judges to be best). But according to Holton’s account, the question of whether the agent was weak-willed ‘will depend on which intentions [the agent] should have stuck with as a rational intender. That is a normative question’ (my emphasis) (241–3, 247–8).
We’ve all been at parties where there's one cookie left on what was once a plate full of cookies, a cookie no one will eat simply because everyone is following a rule of etiquette, according to which you’re not supposed to eat the last cookie. Or at least we think everyone is following this rule, but maybe not. In this paper I present a new paradox, the Cookie Paradox, which is an argument that seems to prove that in any situation in which everyone is truly following the rule, no one eats any cookies at all, no matter how many there are to be eaten. The ‘Cookie Argument’ resembles the more familiar argument that surprise exams are impossible, but it's not exactly the same. I argue that the biggest difference is that, unlike the surprise exam argument, the Cookie Argument is actually sound! I conclude the paper by explaining how it could be possible for a group of people to engage in behavior (eating cookies) that guarantees that at least one of the members of the group will violate a rule, even when it's common knowledge in the group that everyone is committed to following that very rule.
Ordinary people make moral judgments that are consistent with philosophical and legal principles. Do those judgments derive from the controlled application of principles, or do the principles derive from automatic judgments? As a case study, we explore the tendency to judge harmful actions morally worse than harmful omissions (the ‘omission effect’) using fMRI. Because ordinary people readily and spontaneously articulate this moral distinction, it has been suggested that principled reasoning may drive subsequent judgments. If so, people who exhibit the largest omission effect should exhibit the greatest activation in regions associated with controlled cognition. Yet we observed the opposite relationship: activation in the frontoparietal control network was associated with condemning harmful omissions—that is, with overriding the omission effect. These data suggest that the omission effect arises automatically, without the application of controlled cognition. However, controlled cognition is apparently used to overcome automatic judgment processes in order to condemn harmful omissions.