We offer a general framework for theorizing about the structure of knowledge and belief in terms of the comparative normality of situations compatible with one’s evidence. The guiding idea is that, if a possibility is sufficiently less normal than one’s actual situation, then one can know that that possibility does not obtain. This explains how people can have inductive knowledge that goes beyond what is strictly entailed by their evidence. We motivate the framework by showing how it illuminates knowledge about the future, knowledge of lawful regularities, knowledge about parameters measured using imperfect instruments, the connection between knowledge, belief, and probability, and the dynamics of knowledge and belief in response to new evidence.
Dorr et al. present a case that poses a challenge for a number of plausible principles about knowledge and objective chance. Implicit in their discussion is an interesting new argument against KK, the principle that anyone who knows p is in a position to know that they know p. We bring out this argument, and investigate possible responses for defenders of KK, establishing new connections between KK and various knowledge-chance principles.
There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build on the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
An important question in epistemology is whether the KK principle is true, i.e., whether an agent who knows that p is also thereby in a position to know that she knows that p. We explain how a “transparency” account of self-knowledge, which maintains that we learn about our attitudes towards a proposition by reflecting not on ourselves but rather on that very proposition, supports an affirmative answer. In particular, we show that such an account allows us to reconcile a version of the KK principle with an “externalist” or “reliabilist” conception of knowledge commonly thought to make that principle particularly problematic.
Suppose you’d like to believe that p, whether or not it’s true. What can you do to help? A natural initial thought is that you could engage in Intentionally Biased Inquiry: you could look into whether p, but do so in a way that you expect to predominantly yield evidence in favour of p. This paper hopes to do two things. The first is to argue that this initial thought is mistaken: intentionally biased inquiry is impossible. The second is to show that reflections on intentionally biased inquiry strongly support a controversial ‘access’ principle which states that, for all p, if p is part of our evidence, then that p is part of our evidence is itself part of our evidence.
Epistemologists have recently noted a tension between (i) denying access internalism, and (ii) maintaining that rational agents cannot be epistemically akratic, believing claims akin to ‘p, but I shouldn’t believe p’. I bring out the tension, and develop a new way to resolve it. The basic strategy is to say that access internalism is false, but that counterexamples to it are ‘elusive’ in a way that prevents rational agents from suspecting that they themselves are counterexamples to the internalist principles. I argue that this allows us to do justice to the motivations behind both (i) and (ii). And I explain in some detail what a view of evidence that implements this strategy, and makes it independently plausible, might look like.
Good’s theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather information before making a decision when doing so is free. We argue that Good’s theorem is not platitudinous and may be false. And we argue that the correct advice is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way to do this.
Philosophers have recently attempted to justify particular belief revision procedures by arguing that they are the optimal means towards the epistemic end of accurate credences. These attempts, however, presuppose that means should be evaluated according to classical expected utility theory; and there is a long tradition maintaining that expected utility theory is too restrictive as a theory of means–end rationality, ruling out too many natural ways of taking risk into account. In this paper, we investigate what belief-revision procedures are supported by accuracy-theoretic considerations once we depart from expected utility theory to allow agents to be risk-sensitive. We argue that if accuracy-theoretic considerations tell risk-sensitive agents anything about belief-revision, they tell them the same thing they tell risk-neutral agents: they should conditionalize.
The status of the knowledge iteration principles in the account provided by Lewis in “Elusive Knowledge” is disputed. By distinguishing carefully between what in the account describes the contribution of the attributor’s context and what describes the contribution of the subject’s situation, we can resolve this dispute in favour of Holliday’s claim that the iteration principles are rendered invalid. However, that is not the end of the story. For Lewis’s account still predicts that counterexamples to the negative iteration principle come out as elusive: such counterexamples can occur only in possibilities which the attributors of knowledge are ignoring. This consequence is more defensible than it might look at first sight.
Sometimes changes in an agent's partial values can cast a positive light on an earlier action, which was wrong when it was performed. Based on independent reflections about the role of partiality in determining when blame is appropriate, I argue that in such cases the agent shouldn't feel remorse about her action and that others can't legitimately blame her for it, even though that action was wrong. The action thus receives a certain kind of retrospective justification.