
How to (Blind)Spot the Truth: An Investigation on Actual Epistemic Value

Original Research · Erkenntnis

Abstract

This paper is about the alethic aspect of epistemic rationality. The most common approaches to this aspect are either normative (what ought or may a reasoner believe?) or evaluative (how rational is a reasoner?), where the evaluative approaches are usually comparative (one reasoner is assessed in comparison to another). These approaches often run into problems with blindspots. For example, ought a reasoner to believe a currently true blindspot? Is she permitted to? Consequently, these approaches often fail to describe a situation of alethic maximality, in which a reasoner fulfills all the alethic norms and could be used as a standard of rationality (as such reasoners are, in fact, used in some of these approaches). I propose a function \(\alpha\), which accepts a set of beliefs as input and returns a numeric alethic value. I then use this function to define a notion of alethic maximality that is satisfiable by finite reasoners (reasoners with cognitive limitations) and does not run into problems with blindspots. Function \(\alpha\) may also be used in alethic norms and evaluation methods (comparative and non-comparative) that can be applied to finite reasoners and do not run into problems with blindspots. One result of this investigation is that the project of providing purely alethic norms is defective. The use of function \(\alpha\) also sheds light on important epistemological issues, such as the lottery and preface paradoxes and the principles of clutter avoidance and reflection.

Notes

  1. Douven (2013) uses computer simulations to resist the conclusion that Bayesian conditionalization should be regarded as the rational rule for updating credences on the grounds that, even if a reasoner uses a rule at odds with conditionalization, she would find that conditionalization minimizes expected inaccuracy (relative to her own rule) given her own credences (Leitgeb and Pettigrew 2010b). He constructs simulations in which two reasoners, a Bayesian and an “explanationist” who updates credences using a version of the inference to the best explanation, watch sequences of coin tosses and must estimate the extent of the coin’s bias. The Bayesian minimizes the expected inaccuracy of every update, but the explanationist minimizes actual inaccuracy at the end of the sequence (in most trials). Douven (2013, p. 438) remarks that “it would seem absurd to claim that it is epistemically more important to have an update rule that minimises expected inaccuracy than to have one that actually minimises inaccuracy”.
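
     A minimal sketch of a simulation in this spirit is given below (Python). The grid of bias hypotheses, the explanationist bonus, and the Brier-style inaccuracy measure over the hypotheses are illustrative assumptions, not Douven’s exact parameters.

        import random

        BIASES = [i / 10 for i in range(11)]   # hypothesized coin biases (assumed grid)
        BONUS = 0.1                            # assumed explanationist bonus

        def bayes_update(cred, heads):
            # Standard conditionalization on the outcome of one toss.
            post = [c * (b if heads else 1 - b) for c, b in zip(cred, BIASES)]
            z = sum(post)
            return [p / z for p in post]

        def ibe_update(cred, heads):
            # Assumed explanationist rule: conditionalize, then add a small bonus
            # to the hypothesis that makes the observed outcome most likely.
            post = bayes_update(cred, heads)
            best = max(range(len(BIASES)),
                       key=lambda i: BIASES[i] if heads else 1 - BIASES[i])
            post[best] += BONUS
            z = sum(post)
            return [p / z for p in post]

        def inaccuracy(cred, true_bias):
            # Brier-style inaccuracy of the credences over the bias hypotheses.
            return sum((c - (1.0 if b == true_bias else 0.0)) ** 2
                       for c, b in zip(cred, BIASES))

        def trial(true_bias=0.7, tosses=200, seed=0):
            rng = random.Random(seed)
            bayes = [1 / len(BIASES)] * len(BIASES)
            ibe = [1 / len(BIASES)] * len(BIASES)
            for _ in range(tosses):
                heads = rng.random() < true_bias
                bayes = bayes_update(bayes, heads)
                ibe = ibe_update(ibe, heads)
            return inaccuracy(bayes, true_bias), inaccuracy(ibe, true_bias)

        print(trial())  # final inaccuracy of (Bayesian, explanationist)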

  2. Informally, a finite reasoner has cognitive limitations such as finite input (perception, etc., transmits only a finite amount of information), finite memory (memory can store only a finite amount of information), and finite computational power (reasoning can execute only finitely many operations, such as inferential steps, in a finite time interval). I discuss the notion of a finite reasoner in Sect. 3.3.

  3. For example, although Leitgeb (2014, fn. 3) recognizes that “ultimately, we should be concerned with real-world agents”, his “perfectly rational reasoner” (p. 137) is logically omniscient. Finite reasoners cannot, even in principle, (explicitly?) believe all the infinitely many logical consequences of their beliefs because of their finite memory.

  4. For this reason, I partially disagree with the remark in Leitgeb (2014, fn. 3) that it is (always) a good methodological strategy to describe ideal agents (without cognitive limitations), “whom we should strive to approximate”. For example, I do not believe that we should strive to be logically omniscient. In other words, I do not believe that we should strive to clutter our (limited) memory with (infinitely) many logical truths and logical consequences (see Harman 1986, p. 12). This attempt would lead to a form of cognitive paralysis since we would spend all of our cognitive resources in deriving logical truths and consequences, which would prevent us from fulfilling our (epistemic or practical) goals. Also, attempting to believe all the logical consequences of our beliefs is not truth-conducive in general because believing the logical consequences of a false belief may result in an amplification of the initial mistake. My concerns with finite reasoning are also related to the fact that reasoners without cognitive limitations are not fully implementable in a computer simulation. It is an advantage of my framework that it enables the investigation of \(\alpha\)-maximal (but finite) reasoners using the tools of Computational Epistemology.

  5. In general, a blindspot given a propositional attitude A and a reasoner \({\mathcal{R}}\) is a proposition that is possibly true but cannot have attitude A taken towards it by \({\mathcal{R}}\). I am dealing only with the case of \(A=\) true-belief. A proposition \(\phi\) is truly-believable by \({\mathcal{R}}\) iff it is possible that \(\phi\) is true while believed by \({\mathcal{R}}\).

  6. A mundane truth is a contingent truth that is not about (the beliefs of) the reasoner \({\mathcal{R}}\) (e.g. the proposition that snow is white).

  7. Philosophers often distinguish between subjective and objective norms (e.g. Carr 2020), where the subjective but not the objective norms are sensitive to the information that is available to the reasoner (e.g. what is epistemically possible for the reasoner). For this reason, subjective norms are often deliberative, in the sense of being action-guiding. Alethic norms are objective because the truth-value of beliefs is often not transparent for reasoners. Consequently, these norms are often not deliberative, except in some cases when the relevant truth-values are transparent (e.g. contradictions and blindspots). The same holds for evaluation methods.

  8. In deontic logic, ‘may’ is usually modeled as diamond-like (as requiring truth in a permissible situation, where a permissible situation is one in which nobody does what is not permitted by the norms). The diamond does not aggregate over conjunctions in the standard modal logics.

  9. There exist interesting versions of n4 and n5 without the clause ‘\(\phi\) is true’ (call those n4’ and n5’), but problems similar to those for n4 and n5 also hold for n4’ and n5’. About norms n4’ and n5’, see footnote 40.

  10. In deontic logic, the ‘ought’ is often modeled as box-like and the box aggregates over conjunctions in the standard modal logics.

  11. An anonymous reviewer has proposed method m1, which is very crude indeed. I have introduced m1 as a dialectical tool for discussing problems of evaluation methods in general. Another anonymous reviewer has pointed out that m1 would be inadequate independently of blindspots. If the objects of beliefs are centered propositions, then a reasoner cannot believe the propositions centered in another reasoner, which results in incommensurable partial orderings of rationality independently of blindspots. This result is expected due to the nature of centered propositions and de se beliefs. However, I want to maintain my arguments independent of whether the objects of beliefs are sentences, uncentered, or centered propositions.

  12. A similar definition is found in Pettigrew (2016, p. 3): “My proposal is that the accuracy of a credence function for a particular agent in a particular situation is given by its proximity to the credence function that is ideal or perfect or vindicated in that situation. If a proposition is true in a situation, the ideal credence for an agent in that situation is the maximal credence, which is represented as 1. On the other hand, if a proposition is false, the ideal credence in it is the minimal credence, which is represented as 0”.

  13. For another example, Leitgeb and Pettigrew (2010b, Sect. 6.2) argue for Conditionalization, i.e. the norm that a rational reasoner ought to update her credences by Bayesian conditionalization after learning new evidence (see fn. 1).

  14. The ideal set of beliefs would be treated as a standard of rationality because it would always compare favorably to the belief set of any other reasoner. Not all frameworks in EUT use the ideal set of beliefs as a standard of rationality. For two exceptions, see Leitgeb and Pettigrew (2010a) and Easwaran (2013).

  15. Caie (2013) attacks Joyce’s argument for Probabilism using an “obvious truth” that is related to bs1. He argues that a rational reasoner is guaranteed to be probabilistically incoherent given that she is moderately sensitive to her credences and has a high credence in that obvious truth. Fitelson and Easwaran (2015, p. 86) discuss some solutions to this problem, which depend on presuppositions about the objects of beliefs. Pettigrew (2016, p. 4) restricts his framework to situations without blindspots.

  16. In a gradational notion of truth, \(v(\phi )\) has continuum-many values between 0 and 1: 0 when \(\phi\) is absolutely false, 1 when \(\phi\) is absolutely true, and the values between 0 and 1 representing different degrees of truth. In a gradational notion of belief (credences), \(b(\phi )\) has continuum-many values between 0 and 1: 0 for absolute certainty that \(\phi\) is false, 1 for absolute certainty that \(\phi\) is true, and the values between 0 and 1 for other degrees of certainty. Epistemologists (e.g. Pettigrew 2016, ch. 4) sometimes argue that the measures of t and f should have the form of a Brier score for cases involving credences. If you accept these arguments, the following equations (Brier scores) could be used for measuring t and f in the \(\alpha\)-model for the cases involving credences: \(t = \sum _{\phi \in \mathtt {B}} [1 - (v(\phi ) - b(\phi ))^2]\) and \(f = \sum _{\phi \in \mathtt {B}} (v(\phi ) - b(\phi ))^2\).
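
     A minimal sketch of these Brier-style measures (Python); the truth-value assignment v and credence function b below are made-up illustrations, not part of the paper’s model.

        def t_score(v, b):
            # Amount of truth: sum of 1 - (v(phi) - b(phi))^2 over the belief set.
            return sum(1 - (v[phi] - b[phi]) ** 2 for phi in b)

        def f_score(v, b):
            # Amount of falsehood: sum of (v(phi) - b(phi))^2 over the belief set.
            return sum((v[phi] - b[phi]) ** 2 for phi in b)

        v = {"p": 1.0, "q": 0.0, "r": 1.0}   # truth-values
        b = {"p": 0.9, "q": 0.2, "r": 0.6}   # credences
        print(t_score(v, b), f_score(v, b))  # ~2.79 and ~0.21; they sum to |B| = 3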

  17. An opinionated reasoner has an opinion (a belief-value) for every member of the agenda. If the agenda is fixed, then all reasoners that are opinionated for that agenda hold credences for the same objects. Consequently, they hold the same number of beliefs. As an exception, Easwaran (2013) works with infinite agendas, but only with local measures of inaccuracy (that is, measures of the inaccuracy of individual credences). Local measures of inaccuracy could ground a notion of rationality for the holding of particular credences, but not for reasoners in general.

  18. The agenda is always maximal in the sense that it comprises all objects of a kind. If the objects of beliefs are propositions, then no assumption is needed because there are infinitely many propositions. If the objects of beliefs are sentences, then I assume that the relevant languages have enough expressive power (e.g. recursion) for expressing infinitely many truths/falsehoods (see Sect. 3.2).

  19. For this reason, I need two equations: t measures the comprehensiveness of the set of beliefs and f measures its inaccuracy. In comparing reasoners who are opinionated regarding a fixed agenda (and, consequently, have the same number of beliefs), only one (either one) of these two equations is needed (e.g. EUT only uses the measure of f). This feature has to do with a difference in the interpretation of the fulfillment of the truth-goal (see Sect. 4.2).

  20. Independently of whether t and f have continuous values, it would be interesting to require that \(\alpha\) is differentiable at every point (and, consequently, that it is a continuous function). In this case, the first derivative of \(\alpha\) could measure some sort of ‘alethic potential’ (more in the last paragraph of Sect. 3.3).

  21. The maximum of a function is the largest member in its image. Suppose that \(\alpha (t, f) = max (\alpha )\). Requirement r1 entails that \(\alpha (t+1, f) > \alpha (t, f) = max (\alpha )\) and, hence, that \(\alpha (t+1, f) > max (\alpha )\), which is a contradiction. The idea is that a reasoner can always hold an extra true belief and that function \(\alpha\) must react to it.

  22. The supremum of a function \(\alpha\) is the least upper bound of the image of \(\alpha\), defined as a quantity s such that no member in the image of \(\alpha\) exceeds s, but if \(\epsilon\) is any positive quantity, then there is a member in the image of \(\alpha\) that exceeds \(s - \epsilon\). All maxima are suprema, but some suprema are not maxima.

  23. To prevent function \(\alpha\) from collapsing to \(- \infty\), we could require it to have an infimum (but not a minimum). The infimum of a function \(\alpha\) is the greatest lower bound of the image of \(\alpha\) and the minimum of a function is the smallest member in its image. I do not want to discuss this requirement here because it is marginal to the investigation of the notion of \(\alpha\)-maximality. Nevertheless, functions \(\alpha 4\) and \(\alpha 5\) have infima that are not minima (0 and -1 respectively). I refer to the infimum of function \(\alpha\) in Sect. 4.2.

  24. Constant c defines the ‘sensitivity’ of the function: the smaller the c, the greater the benefit for having more truth in the set of beliefs and the greater the penalty for having more falsehood in the set of beliefs. A similar notion may be applied to d. An anonymous referee has called my attention to the fact that Laplace’s rule of succession has the form of \(\alpha 4\), where \(d=1\) and \(c=2\) (let’s call this \(\alpha _{L}\)).

  25. Suppose that \(t^{\prime}>t\). Multiplying both sides by \((f+c+f)\), it follows that \(t^{\prime}(f+c+f) > t(f+c+f)\). Distributing, it follows that \(t^{\prime}f + t^{\prime}c + t^{\prime}f > tf + tc + tf\). Adding \((-t^{\prime}f-tf)\) to both sides, it follows that \(t^{\prime}f + t^{\prime}c - tf > tf + tc - t^{\prime}f\). Since \(tf=ft\) and \(t^{\prime}f=ft^{\prime}\) (commutativity), this is equivalent to \(t^{\prime}f + t^{\prime}c - ft > tf + tc - ft^{\prime}\). Adding \((tt^{\prime}-ff-fc)\) to both sides, it follows that \(tt^{\prime} + t^{\prime}f + t^{\prime}c - ft - ff - fc > tt^{\prime} + ft + tc - ft^{\prime} - ff - fc\). Since \(t^{\prime}t = tt^{\prime}\) and \(ft = tf\), this is equivalent to \(t^{\prime}t + t^{\prime}f + t^{\prime}c - ft - ff - fc > tt^{\prime} + tf + tc - ft^{\prime} - ff - fc\) (commutativity). From distribution, it follows that \((t^{\prime}-f)(t+f+c) > (t-f)(t^{\prime}+f+c)\). Dividing both sides by \((t+f+c)(t^{\prime}+f+c)\), which is positive given \(t, f \ge 0\) and \(c > 0\), it follows that \((t^{\prime}-f)/(t^{\prime}+f+c) > (t-f)/(t+f+c)\).

  26. The value of \((t-f)/(t+f+c)\) decreases only wrt f and strictly increases wrt t. Then \(\lim _{t \rightarrow \infty } (t-0)/(t+0+c) = 1\) is the function’s upper bound because the value of t dominates that of c. Since the value of \((t-f)/(t+f+c)\) strictly increases wrt t and t is unbounded, this upper bound is not a maximum.
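
     A numeric sketch of the candidate functions discussed in notes 23-26 (Python). Their exact definitions are not quoted in these notes; following the Laplace remark in note 24 and the values in notes 23 and 44, I assume \(\alpha 4(t,f)=(t+d)/(t+f+c)\) and \(\alpha 5(t,f)=(t-f)/(t+f+c)\).

        def alpha4(t, f, c=0.001, d=0.001):
            # Assumed form (Laplace's rule of succession when d = 1 and c = 2).
            return (t + d) / (t + f + c)

        def alpha5(t, f, c=0.001):
            # Assumed form, matching the derivation in note 25.
            return (t - f) / (t + f + c)

        # alpha5 strictly increases with t (note 25) ...
        assert alpha5(3, 1) > alpha5(2, 1)
        # ... and its supremum 1 is approached but never reached (note 26).
        print(alpha5(10**6, 0))                    # 0.999999...
        # Infima from note 23: 0 for alpha4 and -1 for alpha5, approached as f grows.
        print(alpha4(0, 10**6), alpha5(0, 10**6))  # ~0.0 and ~-1.0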

  27. There is much work to be done in order to shrink the class of admissible functions \(\alpha\), but I do not think that the outcome of this will be the selection of one admissible function (the function \(\alpha\)). For example, I cannot think of a principled way of specifying the ‘correct’ relative importance between believing the truths and not believing the falsehoods, so as to distinguish, for example, \(\alpha 5\) and \(\alpha 5^{\prime}(t,f)=(2t-f)/(2t+f+c)\).

  28. The use of sentences in this model should not be seen as a commitment to specific objects of beliefs (see fn. 11), but as modeling the objects of beliefs (e.g. propositions). The assumption that agendas are denumerable is substantive because sets of propositions can have higher cardinalities, but it is weaker than the restriction to finite agendas. I will talk informally of propositions as the objects of beliefs.

  29. Informally, reasoners implicitly believe the logical consequences of their explicit beliefs. In the model, reasoners implicitly believe the logical consequences of their belief-set. Informally, reasoners have the accessible belief that \(\phi\) iff they would explicitly believe that \(\phi\) after some amount of reasoning (Konolige 1986, p. 19). In the model, the accessible beliefs of a reasoner are the sentences in \(\pi (\texttt {INPUT}, \mathtt {B}, i)\) for some i.

  30. In this case, the supremum of \(\alpha\) would be a maximum. However, with some mathematical juggling, one could argue that the new function fulfills r1-r3 (e.g. that “\(\infty + 1 = \infty\)”, or something similar).

  31. Function \(\pi\) determines a sequence \(\mathtt {B}_0, \mathtt {B}_1, \ldots , \mathtt {B}_i, \ldots\), where \(\mathtt {B}_0 = \mathtt {B}\) is the reasoner’s initial belief-set and \(\mathtt {B}_{i+1} = \pi (\texttt {INPUT}_{i+1}, \mathtt {B}_i, i+1)\). This sequence could be used to represent the reasoner’s reasoning sequence, although nothing in the following depends on this choice. The condition ‘if her cognitive resources were sufficient’ is necessary (for finite reasoners) because reasoning sequences are infinite sequences.

  32. The notion of stable belief is related to that of P-stability in Leitgeb (2014, p. 140), where a belief is P-stable when it is sufficiently probable given any compatible proposition that is available for the reasoner (e.g. as evidence). There are two differences, though. The first is that P-stability is concerned with the relation between full beliefs and credences. The second is that the notion of stable beliefs considers the order in which these other propositions are considered (which is crucial in dealing with complex blindspots). In the model, the stable beliefs of a reasoner \({\mathcal{R}}\) with reasoning sequence \(\mathtt {B}_0, \mathtt {B}_1, \ldots , \mathtt {B}_i, \ldots\) are in the set \(\mathtt {B}_{\omega } = \bigcup _{i} \bigcap _{j \ge i} \mathtt {B}_j\). If \(\mathtt {B}_{\omega }\) is infinite, \(\alpha (\mathtt {B}_{\omega })\) is undefined. That’s why stable beliefs provide only a “very rough” interpretation of the measure \(\lim _{i \rightarrow \infty } \alpha (\mathtt {B}_i)\), which is not the same as \(\alpha (\mathtt {B}_{\omega })\).
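
     A toy sketch of the reasoning-sequence machinery from notes 29, 31, and 32 (Python). The update function pi below is an illustrative stand-in, not the paper’s \(\pi\), and the stable beliefs are approximated over a finite prefix of the sequence.

        def pi(inp, beliefs, i):
            # Toy update rule: withdraw a belief contradicted by the input, adopt the input.
            new = set(beliefs)
            if inp is not None:
                neg = inp[1:] if inp.startswith("~") else "~" + inp
                new.discard(neg)
                new.add(inp)
            return new

        def reasoning_sequence(inputs, b0=frozenset()):
            # B_0 = B; B_{i+1} = pi(INPUT_{i+1}, B_i, i+1).
            seq, b = [set(b0)], set(b0)
            for i, inp in enumerate(inputs, start=1):
                b = pi(inp, b, i)
                seq.append(set(b))
            return seq

        def stable_beliefs(seq):
            # Finite-prefix approximation of B_omega = union_i intersection_{j >= i} B_j:
            # the beliefs that are held from some stage onwards.
            out = set()
            for i in range(len(seq)):
                out |= set.intersection(*seq[i:])
            return out

        seq = reasoning_sequence(["p", "q", "~p", "p"])
        print(stable_beliefs(seq))  # {'p', 'q'}: the belief '~p' was only held transiently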

  33. This is the notion of psychological certainty: “A belief is psychologically certain when the subject who has it is supremely convinced of its truth. Certainty in this sense is similar to incorrigibility, which is the property a belief has of being such that the subject is incapable of giving it up” (Reed 2011, p. 2). A reasoner is incapable of giving up her beliefs\(_\omega\) in the sense that these are the beliefs that she ‘in fact’ does not give up even in the face of all available evidence.

  34. For example, a reasoner who completely ignores her \(\texttt {INPUT}\)s and has reasoning sequence \(\mathtt {B}_0 = \varnothing , \mathtt {B}_1 = \lbrace \phi \vee \lnot \phi \rbrace , \mathtt {B}_2 = \lbrace \phi \vee \lnot \phi , (\phi \vee \lnot \phi ) \vee \lnot (\phi \vee \lnot \phi ) \rbrace\), etc., is \(\alpha\)-maximal without believing\(_{\omega }\) her blindspots, but this strategy for approaching \(\alpha\)-maximality is problematic with regard to finite reasoners (see Sect. 3.3).

  35. I am assuming that it is possible for \({\mathcal{R}}^*\) to form infinitely many true beliefs\(_\omega\) (and, consequently, to approach \(\alpha\)-maximality) from the available evidence in such a perfect epistemic situation. That assumption would not hold if, for example, the situation were entirely composed of (true) blindspots for \({\mathcal{R}}^*\). However, that would be a skeptical rather than a perfect epistemic situation (see fn. 38).

  36. Davidson (1965, p. 387) argues that a finite reasoner can only learn a language if it is constructive, in the sense of having compositional syntax and semantics. Davidson himself requires those languages to contain finitely many semantic primitives, but we only need to require that the language is recursively enumerable (see Haack 1978).

  37. The \(\texttt {INPUT}_i\) of \({\mathcal{R}}^*\) are composed of at most one sentence. \({\mathcal{R}}^*\)’s initial belief-set is empty and all her other \(\mathtt {B}^*_i\), which are such that \(max(\vert \mathtt {B}^*_{i+1} \vert ) = \vert \mathtt {B}^*_{i} \vert + 1\), are also finite. \({\mathcal{R}}^*\)’s function \(\pi\) is recursive because it executes only finitely many basic operations at each stage \(i+1\): \({\mathcal{R}}^*\) reviews finitely many beliefs in \(\mathtt {B}^*_i\), withdraws finitely many beliefs from \(\mathtt {B}^*_{i+1}\), then considers at most one input in \(\mathtt {INPUT}_{i+1}\) and adds at most one belief to \(\mathtt {B}^*_{i+1}\).
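
     A toy sketch of a single stage of this procedure (Python); the review policy is an illustrative assumption, and only the structural bound \(\vert \mathtt {B}^*_{i+1} \vert \le \vert \mathtt {B}^*_{i} \vert + 1\) is taken from this note.

        def star_stage(beliefs, inp, withdraw=lambda phi: False):
            # Review the (finitely many) current beliefs and withdraw some of them,
            # then consider at most one input sentence and add at most one belief.
            b = {phi for phi in beliefs if not withdraw(phi)}
            if inp is not None:
                b.add(inp)
            return b

        b_i = {"p", "q"}
        b_next = star_stage(b_i, "r")
        assert len(b_next) <= len(b_i) + 1  # the growth bound from this note
        print(b_next)                       # {'p', 'q', 'r'}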

  38. We could talk about a regular epistemic situation, where \({\mathcal{R}}\) can form infinitely many true and infinitely many false beliefs\(_\omega\) from the evidence, and of a skeptical situation, where it is possible for \({\mathcal{R}}\) to form infinitely many false, but at most finitely many true beliefs\(_\omega\) from the evidence. It is possible to approach \(\alpha\)-maximality in a regular epistemic situation, but there is no infallible way to do so (from the evidence). It is impossible to approach \(\alpha\)-maximality (from the evidence) in a skeptical situation.

  39. In order to satisfy some ‘ought’, a (finite) reasoner must adopt or withdraw beliefs (one at a time), but each change in her set of beliefs generates a new set of ‘oughts’ that are ‘active’ for her. This dynamical character of n8-n9 circumvents the problem of complex blindspots (compare with that of n3).

  40. Suppose that \(\mathtt {B}\) has an amount t of truth (including \(\psi\)), an amount f of falsehood, and that the truth-value of the other beliefs remains fixed when the belief that \(\phi\) is adopted/withdrawn. Since \(\mathtt {B} + \phi\) has the same amount \(t+x-x=t\) of truth as \(\mathtt {B} - \phi\) (\(+x\) for \(\phi\), \(-x\) for \(\psi\)) and an amount \(f+y > f\) of falsehood larger than \(\mathtt {B} - \phi\) (\(+y\) for \(\psi\)), it is false that \(\alpha (\mathtt {B} + \phi ) > \alpha (\mathtt {B} - \phi )\). Then it is not the case that \({\mathcal{R}}\) ought to believe \(\phi\). Norms n4, n4’, n5, and n5’ (see fn. 9) are correct when \({\mathcal{R}}\) starts with \(\phi\), but if \({\mathcal{R}}\) starts with \(\psi\), then \(\phi\) would still be truly-believable/true were \({\mathcal{R}}\) to believe that \(\phi\).

  41. A norm cannot prescribe belief for all components of complex blindspots because it would be prescribing belief for the blindspot as an easy logical consequence of its components. The norm cannot prescribe the absence of belief for all components because it would fail to prescribe belief for harmless truths.

  42. Consider Pollock’s notion of ideal warrant: “Ideal warrant has to do with what a reasoner should believe if it could produce all possible relevant arguments and then survey them” (Pollock 1995, p. 133). In some sense, \({\mathcal{R}}^*\) produces all the relevant conclusions because she has access to all and only the true evidence and forms beliefs accordingly. In this sense, \(\mathtt {B}^*_\omega\) is the set of true beliefs that are warranted in a perfect epistemic situation (which is closely related to ideal warrant). The reasoner \({\mathcal{R}}^*\) can be seen as an ‘epistemic counterpart’ of a given rational regular reasoner \({\mathcal{R}}\). Then \(\mathtt {B}^*_\omega\) can also be seen as a counterfactual expansion of \({\mathcal{R}}\)’s pattern of inference: these are the beliefs \({\mathcal{R}}\) would hold if she had sufficient cognitive resources, was in a perfect epistemic situation, and fulfilled the conditions ii-iv in Sect. 3.2.

  43. Consider the \(\mathtt {B}_\omega\) of a reasoner \({\mathcal{R}}\) who differs from \({\mathcal{R}}^*\) only because \({\mathcal{R}}\) is in an imperfect epistemic situation (e.g. \({\mathcal{R}}\) in Sect. 3.3). Norms defined in terms of \(\mathtt {B}_\omega\) simply lack alethic import. Since we don’t know much about \({\mathcal{R}}\)’s epistemic situation, any belief can be in \(\mathtt {B}_\omega\) (excluding those related to blindspots and contradictions).

  44. For example, if \(t=2\) and \(f=1\), \(\alpha 4(t, f) \approx .6667\) and \(\alpha 5(t, f) \approx .3333\) (where c and d are close to 0). See footnote 27.
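
     Under the forms assumed above (\(\alpha 4(t,f)=(t+d)/(t+f+c)\) and \(\alpha 5(t,f)=(t-f)/(t+f+c)\)), the arithmetic is:

     \[ \alpha 4(2,1) = \frac{2+d}{2+1+c} \approx \frac{2}{3} \approx 0.6667, \qquad \alpha 5(2,1) = \frac{2-1}{2+1+c} \approx \frac{1}{3} \approx 0.3333 \quad (c, d \approx 0). \]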

  45. A rational reasoner \({\mathcal{R}}\) should not believe that bs1 for the reasons stated above. The proposition \(\lnot bs1\) is equivalent to the proposition that \({\mathcal{R}}\) believes that bs1. Then, for a rational reasoner, \(\lnot bs1\) should be false and \(\alpha (\mathtt {B} + \lnot bs1) < \alpha (\mathtt {B} - \lnot bs1)\). In general, if \(bs1 \not \in \mathtt {B}\) and \(\lnot bs1 \not \in \mathtt {B}\), then \(\alpha (\mathtt {B} + bs1) = \alpha (\mathtt {B} + \lnot bs1)< \alpha (\mathtt {B} + bs1, \lnot bs1) < \alpha (\mathtt {B})\).

  46. This is the problem of bs1 with the “coherence condition”: “The agent doesn’t accept p [bs1] because he recognises that if he does accept p, and continues to be a good reasoner, then he is in a position to run a modus ponens argument to not-p and thereby come to recognise unconditionally that the proposition he accepts is in fact false” (Kroon 1993, p. 383). The other three arguments in Kroon (1993) are on page 382 and in his footnote 8.

  47. Some of the relevant factors: the more true beliefs a reasoner holds, the less attractive it is for her to believe the propositions in an inconsistent set; the larger the proportion of truth in the set, the more attractive it is for her to believe the propositions in the set; etc. For example, consider the inconsistent set \(\lbrace \phi , \lnot \phi \rbrace\), function \(\alpha 5\) with c close to 0, and a \(\mathtt {B}\) that does not contain \(\phi\) or \(\lnot \phi\). If \(\mathtt {B}\) is such that \(t=1\) and \(f=0\), then \(\alpha (\mathtt {B} \cup \lbrace \phi , \lnot \phi \rbrace ) < \alpha (\mathtt {B} \smallsetminus \lbrace \phi , \lnot \phi \rbrace )\); but if \(t=0\) and \(f=1\), then \(\alpha (\mathtt {B} \cup \lbrace \phi , \lnot \phi \rbrace ) > \alpha (\mathtt {B} \smallsetminus \lbrace \phi , \lnot \phi \rbrace )\).
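
     A numeric check of this example (Python), again assuming \(\alpha 5(t,f)=(t-f)/(t+f+c)\); adding \(\lbrace \phi , \lnot \phi \rbrace\) to \(\mathtt {B}\) adds exactly one truth and one falsehood.

        def alpha5(t, f, c=1e-6):
            return (t - f) / (t + f + c)

        # With t=1, f=0, believing the inconsistent pair lowers the alethic value ...
        print(alpha5(1 + 1, 0 + 1), alpha5(1, 0))  # ~0.33 < ~1.0
        # ... but with t=0, f=1 it raises it.
        print(alpha5(0 + 1, 1 + 1), alpha5(0, 1))  # ~-0.33 > ~-1.0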

  48. This result also supports our intuition that it is epistemically less defective to hold a large body of beliefs that turns out to be inconsistent than a small inconsistent set of beliefs (e.g. an outright contradiction). For example, we blame Frege much less for subscribing to the inconsistent system of the Basic Laws of Arithmetic than a reasoner who believes an outright contradiction. My argument here appeals, not to the greater difficulty of spotting the inconsistency, but to alethic considerations alone.

  49. There are cases in which, when both t and f approach infinity, function \(\vec {\alpha }\) approaches its upper bound. For example, consider a reasoner \({\mathcal{R}}\) with a reasoning sequence such that, at stage 1, the value of f is 1 and the value of t is such that \(\vec {\alpha }(\mathtt {B}_i \cap \mathtt {B}_\omega ) \approx sup(\alpha ) - 1/2\). In each subsequent stage, the value of f increases by 1 and the value of t is such that \(\vec {\alpha }(\mathtt {B}_i \cap \mathtt {B}_\omega )\) is approximately \(sup(\alpha ) - 1/4\), \(sup(\alpha ) - 1/8\), etc.
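
     A sketch of such a sequence (Python), using the assumed \(\alpha 5(t,f)=(t-f)/(t+f+c)\), whose supremum is 1: at stage i the amount of falsehood is i, yet t grows fast enough that the value stays within \(1/2^i\) of the supremum.

        def alpha5(t, f, c=1e-6):
            return (t - f) / (t + f + c)

        c = 1e-6
        for i in range(1, 6):
            eps = 2.0 ** -i
            f = float(i)
            t = (2 * f + c) / eps - f - c  # chosen so that alpha5(t, f) = 1 - eps
            print(i, round(t, 2), round(alpha5(t, f, c), 6))
        # stage 1: t = 3, value 0.5; stage 2: t = 14, value 0.75; and so on.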

  50. The reflection principle states that you should defer your beliefs to those beliefs that you expect to hold in the future. The measure in Eq. 3 is related to the reflection principle in the sense that all the beliefs that are relevant to your rationality are beliefs that you hold in later stages of your reasoning sequence. In some sense, your rational beliefs are only those that you hold in the later stages of your reasoning sequence. There are arguments in favor of the reflection principle in EUT (e.g. Easwaran 2013).

  51. Computational epistemologists most often consider the mean alethic value (e.g. Trpin and Pellert 2019; Olsson 2011), but it may be interesting to consider other statistical averages, such as the mode (e.g. the “in most trials” of Douven 2013, see fn. 1), the median, etc.
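
     A minimal illustration of these averaging options (Python); the stage-by-stage alethic values below are made up, not the output of any particular simulation.

        from statistics import mean, median, mode

        alpha_values = [0.10, 0.25, 0.25, 0.40, 0.55]  # alpha(B_i) for stages 1..5
        print(mean(alpha_values), median(alpha_values), mode(alpha_values))
        # 0.31 0.25 0.25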

  52. Trpin and Pellert’s measure can be used to deal with non-opinionated agents with a varying number of beliefs because function \(\alpha 3\) fulfills most requirements in Sect. 3.1 (more specifically, r1 and r2), but their measure may collapse to \(\pm \infty\) when dealing with infinite agendas because \(\alpha 3\) does not fulfill r3.

  53. For example, some frameworks in EUT measure the (in)accuracy of a regular set of beliefs as its distance to the ideal set of beliefs (Fitelson and Easwaran 2015; Pettigrew 2016). This distance may collapse to \(+ \infty\) when dealing with infinite agendas. For this reason, these frameworks are often restricted to finite agendas (e.g. Leitgeb and Pettigrew 2010a; Fitelson and Easwaran 2015; Pettigrew 2016), where Easwaran (2013) is an exception. Method m1, on the other hand, does not exhibit this problem.

  54. Douven (2013) exploits these different interpretations (see fn. 1).

References

  • Boghossian, Paul. (2003). The normativity of content. Philosophical Issues, 13(1), 31–45.

  • Bykvist, Krister, & Hattiangadi, Anandi. (2007). Does thought imply ought? Analysis, 67(296), 277–285.

  • Bykvist, Krister, & Hattiangadi, Anandi. (2013). Belief, truth, and blindspots. In T. Chan (Ed.), The aim of belief (pp. 100–122). Oxford: Oxford University Press.

  • Caie, Michael. (2013). Rational probabilistic incoherence. Philosophical Review, 122(4), 527–575.

  • Carr, Jennifer. (2020). Should you believe the truth? http://philosophyfaculty.ucsd.edu/~j2carr/research.html.

  • Dantas, Danilo. (2017). No rationality through brute-force. Filosofia Unisinos (Unisinos Journal of Philosophy), 18(3), 195–200.

  • Davidson, Donald. (1965). Theories of meaning and learnable languages. In Y. Bar-Hillel (Ed.), Proceedings of the international congress for logic, methodology, and philosophy of science (pp. 3–17). Amsterdam: North-Holland.

  • Douven, Igor. (2013). Inference to the best explanation, Dutch books, and inaccuracy minimisation. The Philosophical Quarterly, 63(252), 428–444.

  • Easwaran, Kenny. (2013). Expected accuracy supports conditionalization - and conglomerability and reflection. Philosophy of Science, 80(1), 119–142.

  • Egan, Andy, & Elga, Adam. (2005). I can't believe I'm stupid. Philosophical Perspectives, 19(1), 77–93.

  • Fitelson, Branden, & Easwaran, Kenny. (2015). Accuracy, coherence and evidence. In T. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (Vol. 5, pp. 61–96). Oxford: Oxford University Press.

  • Frankfurt, Harry. (2005). On bullshit. Princeton: Princeton University Press.

  • Goldman, Alvin. (1999). Knowledge in a social world. Oxford: Oxford University Press.

  • Goldstein, Michael. (1983). The prevision of a prevision. Journal of the American Statistical Association, 78(384), 817–819.

  • Haack, R. J. (1978). Davidson on learnable languages. Mind, 87(346), 230–249.

  • Harman, Gilbert. (1986). Change in view: Principles of reasoned revision. Cambridge: The MIT Press.

  • Joyce, James. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.

  • Kolmogorov, Andrei. (1950). Foundations of the theory of probability. New York: Chelsea Publishing Company.

  • Konolige, Kurt. (1986). A deduction model of belief. San Francisco: Morgan Kaufmann Publishers Inc.

  • Kroon, Frederick. (1993). Rationality and epistemic paradox. Synthese, 94(3), 377–408.

  • Kyburg, H. (1970). Conjunctivitis. In Induction, acceptance and rational belief (pp. 55–82). Springer Netherlands.

  • Leitgeb, Hannes. (2014). The stability theory of belief. The Philosophical Review, 123(2), 131–171.

  • Leitgeb, Hannes, & Pettigrew, Richard. (2010a). An objective justification of Bayesianism I: Measuring inaccuracy. Philosophy of Science, 77(2), 201–235.

  • Leitgeb, Hannes, & Pettigrew, Richard. (2010b). An objective justification of Bayesianism II: The consequences of minimizing inaccuracy. Philosophy of Science, 77(2), 236–272.

  • Makinson, David. (1965). The paradox of the preface. Analysis, 25, 205–207.

  • Olsson, Erik. (2011). A simulation approach to veritistic social epistemology. Episteme, 8(2), 127–143.

  • Pettigrew, Richard. (2016). Accuracy, chance, and the laws of credence. Oxford: Oxford University Press.

  • Pettigrew, Richard. (2019a). Epistemic utility arguments for probabilism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2019 edition). Metaphysics Research Lab, Stanford University.

  • Pettigrew, Richard. (2019b). Veritism, epistemic risk, and the swamping problem. Australasian Journal of Philosophy, 97(4), 761–774.

  • Pollock, John. (1995). Cognitive carpentry: A blueprint for how to build a person. Cambridge: The MIT Press.

  • Raleigh, Thomas. (2013). Belief norms and blindspots. Southern Journal of Philosophy, 51(2), 243–269.

  • Reed, Baron. (2011). Certainty. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2011 edition). Metaphysics Research Lab, Stanford University.

  • Sorensen, Roy. (1988). Blindspots. Oxford: Oxford University Press.

  • Steele, Katie, & Stefansson, Orri. (2016). Decision theory. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 edition). Metaphysics Research Lab, Stanford University.

  • Treanor, Nick. (2013). The measure of knowledge. Noûs, 47(3), 577–601.

  • Trpin, Borut, & Pellert, Max. (2019). Inference to the best explanation in uncertain evidential situations. British Journal for the Philosophy of Science, 70(4), 977–1001.

  • Wedgwood, Ralph. (2015). Doxastic correctness. Aristotelian Society Supplementary Volume, 87(1), 217–234.

  • Whiting, Daniel. (2010). Should I believe the truth? Dialectica, 64(2), 213–224.

Author information

Correspondence to Danilo Fraga Dantas.

Cite this article

Dantas, D.F. How to (Blind)Spot the Truth: An Investigation on Actual Epistemic Value. Erkenn 88, 693–720 (2023). https://doi.org/10.1007/s10670-021-00377-x
