We examine whether the "evidence of evidence is evidence" principle is true. We distinguish several different versions of the principle and evaluate recent attacks on some of those versions. We argue that, whatever the merits of those attacks, they leave the more important rendition of the principle untouched. That version is, however, also subject to new kinds of counterexamples. We end by suggesting how to formulate a better version of the principle that takes into account those new counterexamples.
Suppose we learn that we have a poor track record in forming beliefs rationally, or that a brilliant colleague thinks that we believe P irrationally. Does such input require us to revise those beliefs whose rationality is in question? When we gain information suggesting that our beliefs are irrational, we are in one of two general cases. In the first case we made no error, and our beliefs are rational. In that case the input to the contrary is misleading. In the second case we indeed believe irrationally, and our original evidence already requires us to fix our mistake. In that case the input to that effect is normatively superfluous. Thus, we know that information suggesting that our beliefs are irrational is either misleading or superfluous. This, I submit, renders the input incapable of justifying belief revision, despite our not knowing which of the two kinds it is.
When a belief is self-fulfilling, having it guarantees its truth. When a belief is self-defeating, having it guarantees its falsity. These are the cases of “self-impacting” beliefs to be examined below. Scenarios of self-defeating beliefs can yield apparently dilemmatic situations in which we seem to lack sufficient reason to have any belief whatsoever. Scenarios of self-fulfilling beliefs can yield apparently dilemmatic situations in which we seem to lack reason to have any one belief over another. Both scenarios have been used independently to challenge Evidentialism, on which what we may rationally believe is all and only what fits our current evidence. Here we tie the two scenarios together and explore what a knowledge-sensitive evidentialist approach to one implies for the other.
The Self-Intimation thesis has it that whatever justificatory status a proposition has, i.e., whether or not we are justified in believing it, we are justified in believing that it has that status. The Infallibility thesis has it that whatever justificatory status we are justified in believing that a proposition has, the proposition in fact has that status. Jointly, Self-Intimation and Infallibility imply that the justificatory status of a proposition closely aligns with the justification we have about that justificatory status. Self-Intimation has two noteworthy implications. First, assuming that we never have sufficient justification for both a proposition and its negation, we can derive Infallibility from Self-Intimation. Interestingly, there seems to be no equivalently simple way to derive Self-Intimation from Infallibility. This asymmetry provides reason for thinking that bottom-level justification rather than top-level justification drives the explanation for why the levels of justification align. Second, Self-Intimation suggests a counterintuitive treatment of information concerning what justificatory status a proposition has. It follows from Self-Intimation that we always have justification for the truth about whether a proposition is justified for us, and therefore, that higher-order evidence could change what we should believe on this matter only by misleading us. This permits forming beliefs about whether a proposition is justified for us without regard to higher-order evidence, and thus reveals a reason for thinking that top-level justification is evidentially inert.
Richard Feldman has proposed and defended different versions of a principle about evidence. In slogan form, the principle holds that ‘evidence of evidence is evidence’. Recently, Branden Fitelson has argued that Feldman’s preferred rendition of the principle falls prey to a counterexample related to the non-transitivity of the evidence-for relation. Feldman replies, arguing that Fitelson’s case does not really represent a counterexample to the principle. In this note, we argue that Feldman’s principle is trivially true.
Should conciliating with disagreeing peers be considered sufficient for reaching rational beliefs? Thomas Kelly argues that when taken this way, Conciliationism lets those who enter into a disagreement with an irrational belief reach a rational belief all too easily. Three kinds of responses defending Conciliationism are found in the literature. One response has it that conciliation is required only of agents who have a rational belief as they enter into a disagreement. This response yields a requirement that no one should follow. If the need to conciliate applies only to already rational agents, then an agent must conciliate only when her peer is the one who is irrational. A second response views conciliation as merely necessary for having a rational belief. This alone does little to address the central question of what is rational to believe when facing a disagreeing peer. Attempts to develop the response either reduce to the first response, or deem necessary an unnecessary doxastic revision, or imply that rational dilemmas obtain in cases where intuitively there are none. A third response tells us to weigh what our pre-disagreement evidence supports against the evidence from the disagreement itself. This invites epistemic akrasia.
Epistemically akratic agents believe both p and that believing p is irrational for them. Some of the costs of thinking that epistemic akrasia can be rational are clear. It is hypocritical, and outright weird, to have beliefs that we consider irrational, let alone to reason with or act on those beliefs. However, as Maria Lasonen-Aarnio (2020) and Brian Weatherson (2019) have argued, the weirdness of akrasia does not obviously tell against its rationality. Here I argue that views permitting epistemic akrasia fare worse than previously thought. These views imply that we should sometimes have beliefs that we know for certain are either irrational or false. And while having a belief that we know to be irrational is straightforwardly irrational, the additional possibility that the belief may be false cannot make having it any more rational.
What it means for an action to have moral worth, and what is required for this to be the case, is the subject of continued controversy. Some argue that an agent performs a morally worthy action if and only if they do it because the action is morally right. Others argue that a morally worthy action is one that an agent performs because of the features that make the action right. These theorists, though they oppose one another, share something important in common. They focus almost exclusively on the moral worth of right actions. But there is a negatively valenced counterpart that attaches to wrong actions, which we will call moral counterworth. In this paper, we explore the moral counterworth of wrong actions in order to shed new light on the nature of moral worth. Contrary to theorists in both camps, we argue that more than one kind of motivation can affect the moral worth of actions.
Is it ever rational to suspend judgment about whether a particular doxastic attitude of ours is rational? An agent who suspends about whether her attitude is rational has serious doubts that it is. These doubts place a special burden on the agent, namely, to justify maintaining her chosen attitude over others. A dilemma arises. Providing justification for maintaining the chosen attitude would commit the agent to considering the attitude rational—contrary to her suspension on the matter. Alternatively, in the absence of such justification, the attitude would be arbitrary by the agent's own lights, and therefore irrational from the agent's own perspective. So, suspending about whether an attitude of ours is rational does not cohere with considering it rationally preferable to other attitudes, and leads to a more familiar form of epistemic akrasia otherwise.