NOTE: This paper is a reworking of some aspects of a previous paper of mine – ‘What else justification could be’, published in Noûs in 2010. I’m currently in the process of writing a book developing and defending some of the ideas from this paper. What follows will, I hope, fall into place as one of the chapters of this book – though it is still very much at the draft stage. Comments are welcome.

My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: if we want to be fallibilists about justification, then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty, then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
Call Justificatory Probabilism (hereafter, JP) the thesis that there is some (classical) probability function Pr such that, for an agent S with evidence E, the degree to which they are justified in believing a hypothesis H is given by Pr(H|E). As stated, the thesis is fairly ambiguous, though none of the disambiguations are obviously true. Indeed, several of them are obviously false. If JP is a thesis about how justified agents are in fully believing propositions, it is trivially false. I’m about to flip a penny. Call H the proposition that it will land heads. Right now I’m completely unjustified in believing either H or ¬H. Yet according to JP, at least one of them must be half-justified.
I argue that inferences from highly probabilifying racial generalizations are not objectionable solely because acting on such inferences would be problematic, or because they violate a moral norm, but because they violate a distinctively epistemic norm. They involve accepting a proposition when, given the costs of a mistake, one is not adequately justified in doing so. First, I sketch an account of the nature of adequate justification – practical adequacy with respect to eliminating the ~p possibilities from one’s epistemic statespace. Second, I argue that inferences based on demographic generalizations tend to disproportionately expose group members to the risks associated with mistakenly assuming stereotypical propositions, and so magnify the wrong involved in relying on such inferences without adequate justification.
Entitlement is defined as a sort of epistemic justification that one can possess by default – a sort of epistemic justification that does not need to be earned or acquired. Epistemologists who accept the existence of entitlement generally have a certain anti-sceptical role in mind for it – entitlement is intended to help us resist what would otherwise be compelling radical sceptical arguments. But this role leaves various details unspecified and, thus, leaves scope for a number of different potential conceptions of entitlement. At one extreme there are conceptions that portray entitlement as a weak, attenuated epistemic status and, at the other, we have conceptions that portray entitlement as something potent and strong. Certain intermediate conceptions are also possible. In this paper, I shall argue that the weak and intermediate conceptions of entitlement do not survive careful scrutiny, and the stronger conceptions – while they do, in a way, strain credulity – are the only conceptions that are ultimately viable.
In this article, I discuss three distinct but related puzzles involving lotteries: Kyburg’s lottery paradox, the statistical evidence problem, and the Harman-Vogel paradox. Kyburg’s lottery paradox is the following well-known problem: if we identify rational outright belief with a rational credence above a threshold, we seem to be forced to admit either that one can have inconsistent rational beliefs, or that one cannot rationally believe anything one is not certain of. The statistical evidence problem arises from the observation that people seem to resist forming outright beliefs whenever the available evidence for the claim under consideration is purely statistical. We need explanations of whether it is in fact irrational to form such beliefs, and of whether a clear distinction can be drawn between statistical and non-statistical evidence. The Harman-Vogel paradox is usually presented as a paradox about knowledge: we tend to assume that we can know so-called ordinary propositions, such as the claim that I will be in Barcelona next spring. Yet, we hesitate to make knowledge claims regarding so-called lottery propositions, such as the claim that I won’t die in a car crash in the next few months, even if these lottery propositions are obviously entailed by the ordinary propositions we claim to know. Depending on one’s view about the relationship between rational belief and knowledge, the Harman-Vogel paradox has ramifications for a theory of rational outright belief. Formal theories of the relationship between rational credence and rational belief, such as Leitgeb’s stability theory, tend to focus mostly on handling Kyburg’s lottery paradox, but not the other two puzzles I mention. My aim in this article is to draw out relationships and differences between the puzzles, and to examine to what extent existing formal solutions to Kyburg’s lottery paradox help with answering the statistical evidence problem and the Harman-Vogel paradox.
Recent years have seen the rise of a new family of non-probabilistic accounts of epistemic justification. According to these views—we may call them Normalcy Views—a belief in P is justified only if, given the evidence, there exists no normal world in which S falsely believes that P. This paper aims to raise some trouble for this new approach to justification by arguing that Normalcy Views, while initially attractive, give rise to problematic accounts of epistemic defeat. As we will see, on Normalcy Views seemingly insignificant pieces of evidence turn out to have considerable defeating powers. This problem—I will call it the Easy-Defeat Problem—gives rise to a two-pronged challenge. First, it shows that the Normalcy View has counterintuitive implications and, second, it opens the door to an uncomfortable skeptical threat.
What makes the difference between good and bad reasoning? In this paper we defend a novel account of good reasoning—both theoretical and practical—according to which it preserves fittingness or correctness: good reasoning is reasoning which is such as to take you from fitting attitudes to further fitting attitudes, other things equal. This account, we argue, is preferable to two others that feature in the recent literature. The first, which has been made prominent by John Broome, holds that the standards of good reasoning derive from rational requirements. The second holds that these standards derive from reasons. We argue that these accounts face serious difficulties in correctly distinguishing good from bad reasoning, and in explaining what's worthwhile about good reasoning. We then propose our alternative account and argue that it performs better on these counts. In the final section, we develop certain elements of the account in response to some possible objections.
In this paper we argue that knowledge is characteristically safe true belief. We argue that an adequate approach to epistemic luck must not be indexed to methods of belief formation, but rather to explanations for belief. This shift is problematic for several prominent approaches to the theory of knowledge, including virtue reliabilism and proper functionalism (as normally conceived). The view that knowledge is characteristically safe true belief is better able to accommodate the shift in question.
Given a few assumptions, the probability of a conjunction is raised, and the probability of its negation is lowered, by conditionalising upon one of the conjuncts. This simple result appears to bring Bayesian confirmation theory into tension with the prominent dogmatist view of perceptual justification – a tension often portrayed as a kind of ‘Bayesian objection’ to dogmatism. In a recent paper, David Jehle and Brian Weatherson observe that, while this crucial result holds within classical probability theory, it fails within intuitionistic probability theory. They conclude that the dogmatist who is willing to take intuitionistic logic seriously can make a convincing reply to the Bayesian objection. In this paper, I argue that this conclusion is premature – the Bayesian objection can survive the transition from classical to intuitionistic probability, albeit in a slightly altered form. I shall conclude with some general thoughts about what the Bayesian objection to dogmatism does and doesn’t show.
One of the deepest ideological divides in contemporary epistemology concerns the relative importance of belief versus credence. A prominent consideration in favor of credence-based epistemology is the ease with which it appears to account for rational action. In contrast, cases with risky payoff structures threaten to break the link between rational belief and rational action. This threat poses a challenge to traditional epistemology, which maintains the theoretical prominence of belief. The core problem, we suggest, is that belief may not be enough to register all aspects of a subject’s epistemic position with respect to any given proposition. We claim this problem can be solved by introducing other doxastic attitudes—genuine representations—that differ in strength from belief. The resulting alternative picture, a kind of doxastic states pluralism, retains the central features of traditional epistemology—most saliently, an emphasis on truth as a kind of objective accuracy—while adequately accounting for rational action.
It is tempting to posit an intimate relationship between belief and assertion. The speech act of assertion seems like a way of transferring the speaker’s belief to his or her audience. If this is right, then you might think that the evidential warrant required for asserting a proposition is just the same as the warrant for believing it. We call this thesis entitlement equality. We argue here that entitlement equality is false, because our everyday notion of belief is unambiguously a weak one. Believing something is true, we argue, is compatible with having relatively little confidence in it. Asserting something requires something closer to complete confidence. Specifically, we argue that believing a proposition merely requires thinking it likely, but that thinking that a proposition is likely does not entitle one to assert it. This conclusion conflicts with a standard view that ‘full belief’ is the central commonsense non-factive attitude.
Here I advance a unified account of the structure of the epistemic normativity of assertion, action, and belief. According to my Teleological Account, all of these are epistemically successful just in case they fulfill the primary aim of knowledgeability, an aim which in turn generates a host of secondary epistemic norms. The central features of the Teleological Account are these: it is compact in its reliance on a single central explanatory posit, knowledge-centered in its insistence that knowledge sets the fundamental epistemic norm, and yet fiercely pluralistic in its acknowledgment of the legitimacy and value of a rich range of epistemic norms distinct from knowledge. Largely in virtue of this pluralist character, I argue, the Teleological Account is far superior to extant knowledge-centered accounts.
This book explores a question central to philosophy – namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it – roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
This paper concerns the apparent fact — discussed by Sinan Dogramaci and Brian Weatherson — that inductive reasoning often interacts in disastrous ways with patterns of reasoning that seem perfectly fine in the deductive case. In contrast to Dogramaci's and Weatherson's own suggestions, I argue that these cases show that we cannot reason inductively about arbitrary objects. Moreover, as I argue, this prohibition is neatly explained by a certain hypothesis about the rational basis of inductive reasoning — namely, the hypothesis that inductive reasoning is fundamentally reasoning about what normally happens.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper finally concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
What is knowledge? What should knowledge be like? Call an epistemological project that sets out to answer the first question ‘descriptive’ and a project that sets out to answer the second question ‘normative’. If the answers to these two questions don’t coincide—if what knowledge should be like differs from what knowledge is like—there is room for a third project we call ‘revisionary’. A revisionary project starts by arguing that what knowledge should be differs from what knowledge is. It then proposes that we revise our account of knowledge accordingly. Our aim in this paper is to develop a methodology for revisionary projects in epistemology. Put roughly, the thought is that we start by looking at the various things that we expect knowledge to do for us. Once we have a list of the various things we expect knowledge to do for us we have a ‘job description’: a list of tasks we need done, and that we expect knowledge to perform. With the job description in hand, we can ask what knowledge would have to be like in order to perform those tasks.
Knowledge norms of action are sometimes said to be motivated by the fact that they align with natural assessments of action in ordinary language. Competent and rational speakers normally use ‘knowledge’ and its cognates when they assess action. In contrast, competing accounts in terms of evidence, warrant or reliability do not straightforwardly align with ordinary language assessments of action. In response to this line of reasoning, I argue that a warrant account of action may explain the prominence of ‘knowledge’ in epistemic assessments better than the knowledge account. If this explanation is successful, it undermines a central rationale for the ‘knowledge first’ program in epistemology. Moreover, the explanation provides an insight into the social functions of knowledge ascriptions as well as a methodological lesson about the relationship between folk epistemology and epistemological theorizing.
This book is about the norms of the speech act of assertion. This is a topic of lively contemporary debate primarily carried out in epistemology and philosophy of language. Suppose that you ask me what time an upcoming meeting starts, and I say, “4 p.m.” I’ve just asserted that the meeting starts at 4 p.m. Whenever we make claims like this, we’re asserting. The central question here is whether we need to know what we say, and, relatedly, whether what we assert must be true. If the meeting is really at 3:30 p.m., you’ll be late, and probably rather upset that I told you the wrong time. In some sense, it seems like I’m on the hook for having said something false. This sense that I’ve done something wrong suggests that there are certain standards of evaluating assertions: a way of distinguishing between good and bad, appropriate and inappropriate. We call these standards norms. And so the debate about what, if any, norms govern the linguistic practice of assertion is known as the norms of assertion debate. When one’s assertion satisfies the norm, we say that the assertion is warranted.

Various philosophers have typically focused their views of the norms of assertion on articulating the level of epistemic support required for properly asserting. Some argue, for example, that one must know what one asserts. Others argue that one merely needs to justifiably believe what one asserts – an epistemic standing weaker than knowledge. The purpose of this book is to defend what I propose as the central norm governing our practice of assertion, which I call the Supportive Reasons Norm. Here’s what it looks like: one may assert that p only if (1) one has supportive reasons for p, (2) the relevant conventional and pragmatic elements of the context are present, and (3) one asserts that p at least in part because the assertion that p satisfies (1) and (2).
In rough outline, the standards for warrantedly asserting shift with changes in context, although knowledge is never required for warrantedly asserting. In fact, in some special contexts, speakers may warrantedly lie. This latter feature particularly sets apart my view from others in the debate. This also means that truth, knowledge, and even belief aren’t necessary conditions for warrantedly asserting.
Mathematicians do not claim to know a proposition unless they think they possess a proof of it. For all their confidence in the truth of a proposition with weighty non-deductive support, they maintain that, strictly speaking, the proposition remains unknown until such time as someone has proved it. This article challenges this conception of knowledge, which is quasi-universal within mathematics. We present four arguments to the effect that non-deductive evidence can yield knowledge of a mathematical proposition. We also show that some of what mathematicians take to be deductive knowledge is in fact non-deductive. 1 Introduction; 2 Why It Might Matter; 3 Two Further Examples and Preliminaries; 4 An Exclusive Epistemic Virtue of Proof?; 5 Analyses of Knowledge; 6 The Inductive Basis of Deduction; 7 Physical to Mathematical Linkages; 8 Conclusion.
This paper offers three objections to Leslie’s recent and already influential theory of generics (Philos Perspect 21:375–403, 2007a; Philos Rev 117:1–47, 2008): her proposed metaphysical truth-conditions are subject to systematic counter-examples, the proposed disquotational semantics fails, and there is evidence that generics do not express cognitively primitive generalisations.
Williamson (2000) appeals to considerations about when it is natural to say that a hypothesis is consistent with one’s evidence in order to motivate the claim that all and only knowledge is evidence. It is argued here that the relevant considerations do not support this claim, and in fact conflict with it.
Hyman (1999, 2006) argues that knowledge is best conceived as a kind of ability: S knows that p iff S can φ for the reason that p. Hyman motivates this thesis by appealing to Gettier cases. I argue that it is counterexampled by a certain kind of Gettier case where the fact that p is a cause of the subject’s belief that p. One can φ for the reason that p even if one does not know that p. So knowledge is not best conceived as an ability of this kind.
This paper will articulate and defend a novel theory of epistemic justification; I characterize my view as the thesis that justification is potential knowledge. My project is an instance of the ‘knowledge-first’ programme, championed especially by Timothy Williamson. So I begin with a brief recapitulation of that programme.
A recent argument by Nadelhoffer et al. defends a cautious optimism regarding the use of neuroprediction in relation to sentencing based, in part, on an assessment of the offender’s dangerousness. While this optimism may be warranted, Nadelhoffer et al.’s argument fails to justify it. Although neuropredictions provide individualized, non-statistical evidence, they will often be problematic for the same reason that basing sentencing on statistical evidence is, to wit, that such predictions are insensitive to the offender’s dangerousness in relevant counterfactual situations and, accordingly, fail to provide the court with knowledge of the offender’s dangerousness. Admittedly, it could be replied that standard clinical assessments of dangerousness possess the same objectionable feature, but doing so undermines a different part of Nadelhoffer et al.’s argument. Finally, I criticize an incentives-based rationale for sentencing informed by neuropredictions of dangerousness.
According to a tradition reaching back to Plato, questions about the nature of knowledge are to be answered by offering an analysis in terms of truth, belief, justification, and other factors presumed to be in some sense more basic than knowledge itself. In light of the apparent failure of this approach, knowledge first philosophy instead takes knowledge as the starting point in epistemology and related areas of the philosophies of language and mind. Knowledge cannot be analyzed in the traditional sense, but this does not make it mysterious or unimportant. On the contrary, we are freed to use our grasp of what knowledge is to elucidate the nature of belief, justification, evidence, the speech act of assertion, and the demands on action and practical reasoning, and to treat knowledge as a purely mental state in its own right. Knowledge First? offers the first overview and critical evaluation of knowledge first philosophy as a whole.
Some series can go on indefinitely, others cannot, and epistemologists want to know in which class to place epistemic chains. Is it sensible or nonsensical to speak of a proposition or belief that is justified by another proposition or belief, ad infinitum? In large part the answer depends on what we mean by “justification.” Epistemologists have failed to find a definition on which everybody agrees, and some have even advised us to stop looking altogether. In spite of this, the present essay submits a few candidate definitions. It argues that, although not giving the final word, these candidates tell us something about the possibility of infinite epistemic chains. And it shows that they can short-circuit a debate about doxastic justification.
I argue that Greco’s handling of barn-façade cases is unsatisfactory, as it is at odds with his treatment of standard Gettier cases. I contend that this is because there is no salient feature of either type of case such that that feature provides a ground to grant, as Greco argues, that there is an exercising of ability in one type of case, standard Gettier cases, but not in the other, barn-façade cases. The result, I argue, is that either Greco must revise his grounds for treating barn-façade cases as he does or he must revise his treatment of standard Gettier cases.
Say that two goals are normatively coincident just in case one cannot aim for one goal without automatically aiming for the other. While knowledge and justification are distinct epistemic goals, with distinct achievement conditions, this paper begins from the suggestion that they are nevertheless normatively coincident—aiming for knowledge and aiming for justification are one and the same activity. A number of surprising consequences follow from this—both specific consequences about how we can ascribe knowledge and justification in lottery cases and more general consequences about the nature of justification and the relationship between justification and evidential probability. Many of these consequences turn out to be at variance with conventional, prevailing views.
In Knowledge and Lotteries, John Hawthorne offers a diagnosis of our unwillingness to believe, of a given lottery ticket, that it will lose a fair lottery – no matter how many tickets are involved. According to Hawthorne, it is natural to employ parity reasoning when thinking about lottery outcomes: put roughly, to believe that a given ticket will lose, no matter how likely that is, is to make an arbitrary choice between alternatives that are perfectly balanced given one’s evidence. It’s natural to think that parity reasoning is only applicable to situations involving lotteries, dice, spinners, etc. – in short, situations in which we are reasoning about the outcomes of a putatively random process. As I shall argue in this paper, however, there are reasons for thinking that parity reasoning can be applied to any proposition that is less than certain given one’s evidence. To see this, we need only remind ourselves of a kind of argument employed by John Pollock and Keith Lehrer in the 1980s. If this argument works, then believing any uncertain proposition, no matter how likely it is, involves a (covert) arbitrary or capricious choice – an idea that contains an obvious sceptical threat.
We commonly say that some evidence supports a hypothesis or that some premise evidentially supports a conclusion. Both internalists and externalists attempt to analyze this notion of evidential support, and the primary purpose of this paper is to argue that reliabilist and proper functionalist accounts of this relation fail. Since evidential support is one component of inferential justification, the upshot of this failure is that their accounts of inferential justification also fail. In Sect. 2, I clarify the evidential support relation. In Sects. 3–5, I subject reliabilist and proper functionalist accounts of evidential support to various counterexamples. In Sect. 6, I show that the most promising ways to address these counterexamples aren’t very promising.
If we add as an extra premise that, if the agent does know H, then it is possible for her to know E → H, we get the conclusion that the agent does not really know H. But even without that closure premise, or something like it, the conclusion seems quite dramatic. One possible response to the argument, floated by both Descartes and Hume, is to accept the conclusion and embrace scepticism. We cannot know anything that goes beyond our evidence, so we do not know very much at all. This is a remarkably sceptical conclusion, so we should resist it if at all possible. A more modern response, associated with externalists like John McDowell and Timothy Williamson, is to accept the conclusion but deny it is as sceptical as it first appears. The Humean argument, even if it works, only shows that our evidence and our knowledge are more closely linked than we might have thought. Perhaps that’s true because we have a lot of evidence, not because we have very little knowledge. There’s something right about this response, I think. We have more evidence than Descartes or Hume thought we had. But I think we still need the idea of ampliative knowledge. It stretches the concept of evidence to breaking point to suggest that all of our knowledge, including knowledge about the future, is part of our evidence. So the conclusion really is unacceptable. Or, at least, I think we should try to see what an epistemology that rejects the conclusion looks like.
After providing some historical and systematic background, I introduce the structure of a very natural and influential sceptical underdetermination argument. The argument assumes that it is metaphysically possible for a deceived subject to have the same evidence that a non-deceived subject has, and tries to draw consequences about justification from that assumption of metaphysical possibility. I first variously object to the transition from the assumption to its supposed consequences. In the central part of the paper, I then critically consider some influential ways of bridging the gap between the assumption and its supposed consequences, which generally consist in strengthening the assumption from one of metaphysical possibility into one of either counterfactual implication or entailment. The discussion indicates that epistemic facts are much more independent from metaphysically modal facts than the sceptical underdetermination argument requires.
One of the hardest problems in the history of Western philosophy has been to explain whether and how experience can provide knowledge about the objective world outside the experiencer's mind. A prominent brand of scepticism has precisely denied that experience can provide such knowledge. How, for instance, can I know that my experiences are not produced in me by a powerful demon? This volume, originating from the research project on Basic Knowledge recently concluded at the Northern Institute of Philosophy, presents new essays on scepticism about the senses written by some of the most prominent contemporary epistemologists. They approach the sceptical challenge by discussing such topics as the conditions for perceptual justification, the existence of a non-evidential kind of warrant and the extent of one's evidence, the epistemology of inference, the relations between justification, probability and certainty, the relevance of subjective appearances to the epistemology of perception, the role that broadly pragmatic considerations play in epistemic justification, the contents of perception, and the function of attention. In all these cases, the papers show how philosophical progress on foundational issues can improve our understanding of and possibly afford a solution to a historically prominent problem like scepticism.
Ichikawa and Jarvis offer a new rationalist theory of mental content and defend a traditional epistemology of philosophy. They argue that philosophical inquiry is continuous with non-philosophical inquiry, and can be genuinely a priori, and that intuitions do not play an important role in mental content or the a priori.
The lottery paradox plays an important role in arguments for various norms of assertion. Why is it that, prior to information on the results of a draw, assertions such as, “My ticket lost,” seem inappropriate? This paper is composed of two projects. First, I articulate a number of problems arising from Timothy Williamson’s analysis of the lottery paradox. Second, I propose a relevant alternatives theory, which I call the Non-Destabilizing Alternatives Theory, that better explains the pathology of asserting lottery propositions, while permitting assertions of what I call fallible propositions such as, “My car is in the driveway.”
The idea that knowledge can be extended by inference from what is known seems highly plausible. Yet, as shown by familiar preface-paradox and lottery-type cases, the possibility of aggregating uncertainty casts doubt on its tenability. We show that these considerations go much further than previously recognized and significantly restrict the kinds of closure ordinary theories of knowledge can endorse. Meeting the challenge of uncertainty aggregation requires either restricting knowledge-extending inferences to single premises, or eliminating epistemic uncertainty in known premises. The first strategy, while effective, retains little of the original idea: conclusions even of modus ponens inferences from known premises are not always known. We then look at the second strategy, inspecting the most elaborate and promising attempt to secure the epistemic role of basic inferences, namely Timothy Williamson’s safety theory of knowledge. We argue that while it indeed has the merit of allowing basic inferences such as modus ponens to extend knowledge, Williamson’s theory faces formidable difficulties. These difficulties, moreover, arise from the very feature responsible for its virtue: the infallibilism of knowledge.
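The aggregation worry described in this abstract can be made vivid with a simple calculation (an illustrative sketch of the editor's, not drawn from the paper itself): if each of n independent premises is 99% probable on the evidence, the probability of their conjunction is 0.99^n, which falls below one half surprisingly quickly.

```python
def premises_to_below_half(p: float) -> int:
    """Smallest number n of independent premises, each with probability p,
    whose conjunction has probability below 0.5. Illustrates how epistemic
    uncertainty aggregates under multi-premise closure."""
    n = 1
    while p ** n >= 0.5:
        n += 1
    return n

# With premises that are individually 99% probable, a conjunction of
# 69 of them is already more likely to be false than true.
print(premises_to_below_half(0.99))  # → 69
```

This is the sense in which multi-premise closure strains ordinary theories of knowledge: each inference step adds only a sliver of error risk, yet the risks compound geometrically.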
Entitlement is conceived as a kind of positive epistemic status, attaching to certain propositions, that involves no cognitive or intellectual accomplishment on the part of the beneficiary — a status that is in place by default. In this paper I will argue that the notion of entitlement — or something very like it — falls out of an idea that may at first blush seem quite unrelated: that the evidential support relation can be understood as a kind of variably strict conditional (in the sense of Lewis 1973). Lewis provided a general recipe for deriving what he termed inner modalities from any variably strict conditional governed by a logic meeting certain constraints. On my proposal, entitlement need be nothing more exotic than the inner necessity associated with evidential support. Understanding entitlement in this way helps to answer some common concerns — in particular, the concern that entitlement could only be a pragmatic, and not genuinely epistemic, status.
A popular account of epistemic justification holds that justification, in essence, aims at truth. An influential objection against this account points out that it is committed to holding that only true beliefs could be justified, which most epistemologists regard as sufficient reason to reject the account. In this paper I defend the view that epistemic justification aims at truth, not by denying that it is committed to epistemic justification being factive, but by showing that, when we focus on the relevant sense of ‘justification’, it isn’t in fact possible for a belief to be at once justified and false. To this end, I consider and reject three popular intuitions speaking in favor of the possibility of justified false beliefs, and show that a factive account of epistemic justification is less detrimental to our normal belief-forming practices than often supposed.
Several philosophers have claimed that S knows p only if S's belief is safe, where S's belief is safe iff (roughly) in nearby possible worlds in which S believes p, p is true. One widely held intuition is that one cannot know that one's lottery ticket will lose a fair lottery prior to an announcement of the winner, regardless of how probable it is that it will lose. Duncan Pritchard has claimed that a chief advantage of safety theory is that it can explain the lottery intuition without succumbing to skepticism. I argue that Pritchard is wrong: if a version of safety theory can explain the lottery intuition, it will also lead to skepticism. (Dylan Dodd, Department of Philosophy, Northern Institute of Philosophy, University of Aberdeen; Erkenntnis, DOI 10.1007/s10670-011-9305-z.)
The law views statistical evidence with suspicion, even evidence that is probabilistically on a par with direct, individual evidence of which the law is in no way suspicious. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge – and Sensitivity, and indeed epistemology in general – are of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
This paper argues that there is no sustainable theoretical alternative for building knowledge without principles of cooperation – aimed at the formation and distribution of beliefs – among individuals. This principle helps us both to conceive the relation between internalist and externalist theories and to frame a cognitive explanation based on the concept of epistemic warrant. The concluding remark is that concepts such as evidence and reliability can only be conceived as skills of subjects belonging to a community.
In light of the failure of attempts to analyse knowledge as a species of justified belief, a number of epistemologists have suggested that we should instead understand justification in terms of knowledge. This paper focuses on accounts of justification as a kind of knowledge. According to such accounts a belief is justified just in case any failure to know is due to uncooperative external circumstances. I argue against two recent accounts of this sort due to Alexander Bird and Martin Smith. A further aim is to defend a more traditional conception, according to which justification is a matter of sufficiently high evidential likelihood. In particular, I suggest that this conception of justification offers a plausible account of lottery cases: cases in which one believes a true proposition on the basis of probabilistic evidence.
In ‘The normative role of knowledge’ (2012), Declan Smithies defends a ‘JK-rule’ for belief: one has justification to believe that P iff one has justification to believe that one is in a position to know that P. Similar claims have been defended by others (Huemer 2007; Reynolds forthcoming). In this paper, I shall argue that the JK-rule is false. The standard and familiar way of arguing against putative rules for belief or assertion is, of course, to describe putative counterexamples. My argument, though, won’t be like this – indeed, I doubt that there are any intuitively compelling counterexamples to the JK-rule. Nevertheless, the claim that there are counterexamples to the JK-rule can, I think, be given something approaching a formal proof. My primary aim here is to sketch this proof. I will briefly consider some broader implications for how we ought to think about the epistemic standards governing belief and assertion.
This paper addresses an argument offered by John Hawthorne against the propriety of an agent’s using propositions she does not know as premises in practical reasoning. I will argue that there are a number of potential structural confounds in Hawthorne’s use of his main example, a case of practical reasoning about a lottery. By drawing these confounds out more explicitly, we can get a better sense of how to make appropriate use of such examples in theorizing about norms, knowledge, and practical reasoning. I will conclude by suggesting a prescription for properly using lottery propositions to do the sort of work that Hawthorne wants from them.
In this paper I draw attention to a peculiar epistemic feature exhibited by certain deductively valid inferences: they are unable to enhance the reliability of one's belief that the conclusion is true, in a sense that will be fully explained. As I shall show, this feature is demonstrably present in certain philosophically significant inferences, such as G. E. Moore's notorious 'proof' of the existence of the external world. I suggest that this peculiar epistemic feature might be correlated with the much discussed phenomenon that Crispin Wright and Martin Davies have called 'transmission failure' – the apparent failure, on the part of some deductively valid inferences, to transmit one's justification for believing the premises.