
How much are bold Bayesians favoured?


Abstract

Rédei and Gyenis recently established strong constraints on Bayesian learning (in the form of Jeffrey conditioning). However, they also presented a positive result for Bayesianism. Although that positive result is of limited significance, I find it useful to discuss two possible strengthenings of it, which yield new results and open new questions about the limits of Bayesianism. First, I show that one cannot strengthen the positive result by restricting the evidence to so-called “certain evidence”. Secondly, strengthening the result by restricting the partitions (as parts of one’s evidence) to Jeffrey-independent partitions requires additional constraints on one’s evidence to preserve commutativity. My results thus provide additional grounds for caution and further support for the claim that Bayesian learning is limited.


Notes

  1. It may help to say that, to me, the idea of a conservative learner closely resembles an ur-conditionaliser with a stable ur-prior; e.g., see Meacham (2016) for a definition and a thorough discussion of ur-conditionalisation. I focus primarily on a bold learner in this reply, so I will not consider this similarity any further, but I wanted to mention it as a point of reference.

  2. This is a common understanding of evidence in the context of Jeffrey conditioning, but there is also an alternative understanding (e.g., see Field, 1978; Wagner, 2002).

  3. There are two caveats. First, assume that an agent learns \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) for which \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(q_{{\mathcal {E}}}(E_{i\ne k})=0\) for all \(E_{i}\in {\mathcal {E}}\). If \(E_{i=k}\) is a singleton, then the ratios are trivially broken: the posterior probability of the single element in \(E_{i=k}\) is 1 and every other element of \(\Omega \) gets the probability of 0. Secondly, if \(E_{i=k}\) is not a singleton, the ratios are also trivially broken for those \(\omega \in \Omega \) that have been excluded by the learnt evidence, since their probability is now 0.

References

  • Billingsley, P. (1995). Probability and measure (3rd ed.). Wiley.

  • Diaconis, P., & Zabell, S. L. (1982). Updating subjective probability. Journal of the American Statistical Association, 77(380), 822–830.

  • Field, H. (1978). A note on Jeffrey conditionalization. Philosophy of Science, 45(3), 361–367.

  • Lange, M. (2000). Is Jeffrey conditionalization defective by virtue of being non-commutative? Remarks on the sameness of sensory experiences. Synthese, 123(3), 393–403.

  • Meacham, C. J. G. (2016). Ur-priors, conditionalization, and ur-prior conditionalization. Ergo, 3(17), 444–489.

  • Rédei, M., & Gyenis, Z. (2017). General properties of Bayesian learning as statistical inference determined by conditional expectations. Review of Symbolic Logic, 10(4), 719–755.

  • Rédei, M., & Gyenis, Z. (2021). Having a look at the Bayes blind spot. Synthese, 198, 3801–3832.

  • Rosenthal, J. S. (2006). A first look at rigorous probability theory (2nd ed.). World Scientific Publishing Co.

  • Tao, T. (2011). An introduction to measure theory. American Mathematical Society.

  • Wagner, C. G. (2002). Probability kinematics and commutativity. Philosophy of Science, 69(2), 266–278.

  • Weisberg, J. (2009). Commutativity or holism? A dilemma for conditionalizers. The British Journal for the Philosophy of Science, 60(4), 793–812.

  • Williams, D. (1991). Probability with martingales. Cambridge University Press.


Acknowledgements

I am grateful to my colleagues from the Czech Academy of Sciences, the LoPSE group at the University of Gdańsk, and the University of Bristol for their comments on various stages of my paper. Thanks to Miklós Rédei and Zalán Gyenis for answering my questions about their paper. Thanks to anonymous referees for their very useful comments, to the editors, and to Jonáš Gray for linguistic advice.

Funding

The work on this paper was supported by the Formal Epistemology – the Future Synthesis grant, in the framework of the Praemium Academicum programme of the Czech Academy of Sciences. The funding source had no involvement in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.

Author information


Corresponding author

Correspondence to Pavel Janda.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Proofs for Section 3 (Limitations of the two-step strategy)

Proposition 1

Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), and \(E_{i=k}\in {\mathcal {E}}\) be given. If \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(E_{i=k}\) is not a singleton, then the ratios in Eq. 3 established by a faithful prior p of a bold Bayesian agent remain constant for any \(\omega _{i},\omega _{j}\in E_{i=k}\).

Proof

Assume that a bold agent learns \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) such that \(q_{{\mathcal {E}}}(E_{i=k})=1\) and \(q_{{\mathcal {E}}}(E_{i\ne k})=0\) for all \(E_{i}\in {\mathcal {E}}\). Assume that \(E_{i=k}\) is not a singleton (see Footnote 3) and \(p(E_{i=k})\ne 0\). Consequently, \(\nicefrac {q_{{\mathcal {E}}}(E_{i=k})}{p(E_{i=k})}=\nicefrac {1}{p(E_{i=k})}\) and \(\nicefrac {q_{{\mathcal {E}}}(E_{i\ne k})}{p(E_{i\ne k})}=0\). By additivity, I can consider the elements of each \(E_{i}\) individually. The prior probability of every \(\omega \in E_{i\ne k}\) will be multiplied by 0, so I can ignore it. The prior probability of every \(\omega \in E_{i=k}\) will be multiplied by the scalar \(\nicefrac {1}{p(E_{i=k})}\). Assume that \(\omega _{i},\omega _{j}\in E_{i=k}\); then the ratio of the priors, \(p(\{\omega _{i}\})\) and \(p(\{\omega _{j}\})\), will be the same as the ratio of the posteriors, \(q(\{\omega _{i}\})\) and \(q(\{\omega _{j}\})\):

$$\begin{aligned} \frac{q(\{\omega _{i}\})}{q(\{\omega _{j}\})}=\frac{\frac{1}{p(E_{i=k})}p(\{\omega _{i}\})}{\frac{1}{p(E_{i=k})}p(\{\omega _{j}\})}=\frac{p(\{\omega _{i}\})}{p(\{\omega _{j}\})}. \end{aligned}$$

Assume that q becomes the agent’s new prior. Further, assume that she learns new certain evidence \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), i.e., \(r_{{\mathcal {F}}}(F_{i=k})=1\), and \(F_{i=k}\) is not a singleton. Further, assume that \(\omega _{i},\omega _{j}\in F_{i=k}\) and let r be a posterior after updating q on \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\). Following the strategy discussed earlier (about updating p on \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\)), I can show that \(\nicefrac {r(\{\omega _{i}\})}{r(\{\omega _{j}\})}=\nicefrac {q(\{\omega _{i}\})}{q(\{\omega _{j}\})}\), which means that \(\nicefrac {r(\{\omega _{i}\})}{r(\{\omega _{j}\})}=\nicefrac {p(\{\omega _{i}\})}{p(\{\omega _{j}\})}\), etc. \(\square \)
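To make the mechanics easy to check, here is a minimal Python sketch of Proposition 1. It assumes only the standard Jeffrey rule (as in Eq. 1); the sample space, prior, and partitions below are hypothetical stand-ins, not taken from the paper.

```python
# A numerical sketch of Proposition 1 (hypothetical prior and partitions).
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    """Jeffrey conditioning: q(w) = (weights[i] / p(E_i)) * p(w) for w in E_i."""
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

p = {1: F(1, 2), 2: F(1, 4), 3: F(1, 6), 4: F(1, 12)}  # hypothetical faithful prior
E = [{1, 2}, {3, 4}]                                   # E_{i=k} = {3, 4}, not a singleton
q = jeffrey(p, E, [F(0), F(1)])                        # certain evidence: weight 1 on {3, 4}

assert q[3] / q[4] == p[3] / p[4]                      # ratios inside E_{i=k} are preserved

# A second round of certain evidence with w3, w4 inside F_{i=k} behaves the same way:
r = jeffrey(q, [{2, 3, 4}, {1}], [F(1), F(0)])
assert r[3] / r[4] == p[3] / p[4]                      # the ratio survives every round
```

The asserts hold for any choice of prior and certain evidence with \(\omega _{i},\omega _{j}\) in the weight-1 cell, which is exactly the constancy of ratios the proposition asserts.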

Claim 1

Given Definition 2, \({\mathcal {E}}\) and \({\mathcal {F}}\) in Example 3 and Example 4 are not Jeffrey-independent.

Proof

By Definition 2, Jeffrey independence needs to hold for all i and j. So, it is enough to consider a single example of \(F_{j}\) that violates Jeffrey independence to show that \({\mathcal {E}}\) and \({\mathcal {F}}\) are not Jeffrey-independent. Consider \(F_{0}=\{\omega _{1},\omega _{2}\}\in {\mathcal {F}}\). In Example 3, \(p^{{\mathcal {E}}}\) is q. So, for Jeffrey independence to hold, one needs \(p^{{\mathcal {E}}}(F_{0})=q(F_{0})=p(F_{0})\). But, by additivity, one has that \(p(F_{0})=p(\{\omega _{1}\})+p(\{\omega _{2}\})=\nicefrac {3}{4}\) and \(q(F_{0})=q(\{\omega _{1}\})+q(\{\omega _{2}\})=\nicefrac {1}{2}\ne \nicefrac {3}{4}\). So, Jeffrey independence is violated. \(\square \)
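The computation behind Claim 1 can be replayed numerically. In the sketch below, the prior is the vector reported for Example 3 in the proof of Claim 5; since the full evidence data of Example 3 are not reproduced in this appendix, the partition \({\mathcal {E}}\) and its weights are hypothetical stand-ins chosen only so that \(q(F_{0})=\nicefrac {1}{2}\), as the claim requires.

```python
# Checking Definition 2 (Jeffrey independence) on Example-3-style data.
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

def jeffrey_independent(p, E, qE, Fp, rF):
    """Definition 2: p^E(F_j) = p(F_j) and p^F(E_i) = p(E_i) for all i, j."""
    pE, pF = jeffrey(p, E, qE), jeffrey(p, Fp, rF)
    ok_F = all(sum(pE[w] for w in c) == sum(p[w] for w in c) for c in Fp)
    ok_E = all(sum(pF[w] for w in c) == sum(p[w] for w in c) for c in E)
    return ok_F and ok_E

# Prior from Example 3 (as quoted in the proof of Claim 5 below).
p = {i: x for i, x in enumerate([F(1,2), F(1,4), F(1,8), F(1,16), F(1,32), F(1,32)], 1)}
Fp = [{1, 2}, {3}, {4}, {5}, {6}]               # F_0 = {w1, w2}; 'Fp' avoids the alias F
E  = [{1}, {2}, {3, 4, 5, 6}]                   # hypothetical partition E
qE = [F(1,4), F(1,4), F(1,2)]                   # hypothetical weights with q(F_0) = 1/2
rF = [F(1,2), F(1,8), F(1,8), F(1,8), F(1,8)]   # hypothetical weights for F

print(jeffrey_independent(p, E, qE, Fp, rF))    # False: q(F_0) = 1/2 but p(F_0) = 3/4
```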

Lemma 1

Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), Definition 2, and Theorem 1 be given. If \({\mathcal {E}}\) and \({\mathcal {F}}\) are Jeffrey-independent with respect to p, \(q_{{\mathcal {E}}}\), and \(r_{{\mathcal {F}}}\), i.e., \(p^{{\mathcal {E}}{\mathcal {F}}} = p^{{\mathcal {F}}{\mathcal {E}}}\), then \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})\) and \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})\).

Proof

Consider a measurable space \((\Omega ,{\mathcal {S}})\). Assume that p is one's faithful prior, and the agent learns uncertain evidence \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\) with Jeffrey-independent \({\mathcal {E}}\) and \({\mathcal {F}}\). Then, by Definition 2, \(p^{{\mathcal {E}}}(F_{j})=p(F_{j})\) and \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) hold for all i and j. By assumption, any \(F_{j},E_{i}\in {\mathcal {S}}\), so I can take any \(F_{j}\) or \(E_{i}\) to be B.

  1.

    Since \({\mathcal {E}}\) and \({\mathcal {F}}\) come from (4), \(F_{j}\) can be either a singleton \(F_{j}=\{\omega _{j}\}\) (for \(j=1,\dots ,n\)) or \(F_{0}=\{a,b\}\). So, the condition \(p^{{\mathcal {E}}}(F_{j})=p(F_{j})\) gives either \(p^{{\mathcal {E}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\) or \(p^{{\mathcal {E}}}(\{a,b\})=p(\{a,b\})\). If \(F_{j}=\{\omega _{j}\}\), then \(\omega _{j}\in E_{3}\). Assuming that \(p^{{\mathcal {E}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\) and Equation 1 hold, one has that:

    $$\begin{aligned} p^{{\mathcal {E}}}(\{\omega _{j}\})=\frac{q_{{\mathcal {E}}}(E_{3})}{p(E_{3})} \,p(\{\omega _{j}\})\;\;\text {so}\;\;p(E_{3})=q_{{\mathcal {E}}}(E_{3}). \end{aligned}$$
    (5)
  2.

    Since \({\mathcal {E}}\) and \({\mathcal {F}}\) come from (4), \(E_{i}\) can be \(E_{1}=\{a\}\), \(E_{2}=\{b\}\), or \(E_{3}\). Assume that \(E_{i}=\{a\}\) or \(E_{i}=\{b\}\). One knows that \(a,b\in F_{0}\) since \(F_{0}=\{a,b\}\). If \(E_{i}=E_{1}=\{a\}\), then, by the assumption that \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) and Equation 1, one has that:

    $$\begin{aligned} p^{{\mathcal {F}}}(\{a\})=\frac{r_{{\mathcal {F}}}(F_{0})}{p(F_{0})}\,p(\{a\})\;\;\text {so}\;\; p(F_{0})=r_{{\mathcal {F}}}(F_{0}). \end{aligned}$$
    (6)

    If \(E_{i}=E_{2}=\{b\}\), then, by \(p^{{\mathcal {F}}}(E_{i})=p(E_{i})\) and Equation 1, one has that:

    $$\begin{aligned} p^{{\mathcal {F}}}(\{b\})=\frac{r_{{\mathcal {F}}}(F_{0})}{p(F_{0})}\,p(\{b\})\;\;\text {so}\;\; p(F_{0})=r_{{\mathcal {F}}}(F_{0}). \end{aligned}$$
    (7)

\(\square \)

Claim 2

If Lemma 1 holds, then \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\).

Proof

By additivity, \(p(\{a,b\})=p(\{a\})+p(\{b\})=p(E_{1})+p(E_{2})\). One can now write \(p(E_{3})+p(E_{1})+p(E_{2})=p(E_{3})+p(\{a,b\})=1\). Since \(\{a,b\}=F_{0}\), one has \(p(E_{3})+p(F_{0})=1\). By Lemma 1, \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})\) and \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})\), so \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\). \(\square \)

Claim 3

If Lemma 1 and Claim 2 hold, then \(p(F_{0})=q_{{\mathcal {E}}}(F_{0})=r_{{\mathcal {F}}}(F_{0})\) and \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})\).

Proof

One knows that \(q_{{\mathcal {E}}}(E_{1})+q_{{\mathcal {E}}}(E_{2})+q_{{\mathcal {E}}}(E_{3})=1\). That is, \(q_{{\mathcal {E}}}(\{a\})+q_{{\mathcal {E}}}(\{b\})+q_{{\mathcal {E}}}(E_{3})=1\). By additivity, \(q_{{\mathcal {E}}}(\{a,b\})=q_{{\mathcal {E}}}(\{a\})+q_{{\mathcal {E}}}(\{b\})\). So, I can write that \(q_{{\mathcal {E}}}(\{a,b\})+q_{{\mathcal {E}}}(E_{3})=1\). This means that \(q_{{\mathcal {E}}}(E_{3})=1-q_{{\mathcal {E}}}(\{a,b\})\). Since \(\{a,b\}=F_{0}\), one has \(q_{{\mathcal {E}}}(E_{3})=1-q_{{\mathcal {E}}}(F_{0})\). By Claim 2, \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\). So, one has that \(1-q_{{\mathcal {E}}}(F_{0})+r_{{\mathcal {F}}}(F_{0})=1\). Thus, \(r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})\). By Lemma 1, \(p(F_{0})=r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})\).

One knows that \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(F_{1})+\dots +r_{{\mathcal {F}}}(F_{n})=1\). That is, \(r_{{\mathcal {F}}}(\{a,b\})+r_{{\mathcal {F}}}(\{\omega _{1}\})+\dots +r_{{\mathcal {F}}}(\{\omega _{n}\})=1\). Now, by additivity, \(r_{{\mathcal {F}}}(F_{1})+\dots +r_{{\mathcal {F}}}(F_{n})=r_{{\mathcal {F}}}(\{\omega _{1},\dots ,\omega _{n}\})\). So, \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(\{\omega _{1},\dots ,\omega _{n}\})=1\). But one also knows that \(\{\omega _{1},\dots ,\omega _{n}\}=E_{3}\). So, \(r_{{\mathcal {F}}}(F_{0})+r_{{\mathcal {F}}}(E_{3})=1\). By Claim 2, \(r_{{\mathcal {F}}}(F_{0})=1-q_{{\mathcal {E}}}(E_{3})\). Finally, this gives \(1-q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(E_{3})=1\), and so \(r_{{\mathcal {F}}}(E_{3})=q_{{\mathcal {E}}}(E_{3})\). By Lemma 1, \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})\). \(\square \)

Claim 4

If Lemma 1 and Claim 3 hold, then:

$$\begin{aligned} \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})} =\frac{q_{{\mathcal {E}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{r_{{\mathcal {F}}}(E_{3})}. \end{aligned}$$

Proof

By Lemma 1 and simple arithmetic operations, one has that:

$$\begin{aligned} \frac{q_{{\mathcal {E}}}(E_{3})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{p(F_{0})} \;\;\;\text {so}\;\;\; \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}. \end{aligned}$$

So, by Claim 3, I can write that:

$$\begin{aligned} \frac{p(F_{0})}{p(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})} =\frac{q_{{\mathcal {E}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=\frac{r_{{\mathcal {F}}}(F_{0})}{r_{{\mathcal {F}}}(E_{3})}. \end{aligned}$$

\(\square \)

Claim 5

Given Claim 3, if Jeffrey conditioning in Example 3 is commutative with respect to \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\) and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\), then the posterior r is unreachable from the prior p with the two-step strategy.

Proof

In Example 3, one has \({\mathbf {r}}=[\nicefrac {1}{16},\nicefrac {2}{16},\nicefrac {3}{16},\nicefrac {4}{16},\nicefrac {5}{16},\nicefrac {1}{16}]\) and \({\mathbf {p}}=[\nicefrac {1}{2},\nicefrac {1}{4},\nicefrac {1}{8},\nicefrac {1}{16},\nicefrac {1}{32},\nicefrac {1}{32}]\). Given \({\mathbf {p}}\), by Claim 3, one has that \(p(E_{3})=q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})=\nicefrac {1}{4}\). So, by additivity, credences in \(\{\omega _{3}\}, \{\omega _{4}\}, \{\omega _{5}\}\), and \(\{\omega _{6}\}\) (whose union forms \(E_{3}\)) must sum to \(\nicefrac {1}{4}\). But, as indicated in \({\mathbf {r}}\), the final credences in \(\{\omega _{3}\}, \{\omega _{4}\}, \{\omega _{5}\}\), and \(\{\omega _{6}\}\) should be \(\nicefrac {3}{16},\nicefrac {4}{16},\nicefrac {5}{16}\), and \(\nicefrac {1}{16}\), respectively. This means that \(\nicefrac {3}{16}+\nicefrac {4}{16}+\nicefrac {5}{16}+\nicefrac {1}{16}=\nicefrac {13}{16}>\nicefrac {1}{4}\). \(\square \)
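Claim 5's arithmetic can be verified directly from the two vectors quoted in the proof; a minimal check (list indices 2–5 pick out \(\omega _{3},\dots ,\omega _{6}\), i.e., \(E_{3}\)):

```python
# Verifying the arithmetic of Claim 5 from the quoted vectors p and r.
from fractions import Fraction as F

p = [F(1,2), F(1,4), F(1,8), F(1,16), F(1,32), F(1,32)]
r = [F(1,16), F(2,16), F(3,16), F(4,16), F(5,16), F(1,16)]

p_E3 = sum(p[2:6])      # prior mass on E_3: 1/4, which Claim 3 pins in place
r_E3 = sum(r[2:6])      # mass the target posterior r puts on E_3: 13/16
assert p_E3 == F(1, 4) and r_E3 == F(13, 16) and r_E3 != p_E3
print(p_E3, r_E3)       # 1/4 vs 13/16, so r is unreachable by the two-step strategy
```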

Proposition 2

Let a prior p, \(\{{\mathcal {E}},q_{{\mathcal {E}}}\}\), and \(\{{\mathcal {F}},r_{{\mathcal {F}}}\}\) be given. If Claim 2, Claim 3, and Claim 4 hold, then the bold Bayes (p, 2)-Blind Spot is infinitely large and has at least continuum cardinality.

Proof

Assume that p is a faithful prior. By Claim 4, \(\nicefrac {p(F_{0})}{p(E_{3})}=\nicefrac {r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}\). Now, given one's prior p, \(\nicefrac {p(F_{0})}{p(E_{3})}\) will equal a constant c, i.e., \(\nicefrac {p(F_{0})}{p(E_{3})}=c\). So, \(\nicefrac {r_{{\mathcal {F}}}(F_{0})}{q_{{\mathcal {E}}}(E_{3})}=c\). Rearranging, one has that \(r_{{\mathcal {F}}}(F_{0})=cq_{{\mathcal {E}}}(E_{3})\). By Claim 2, \(r_{{\mathcal {F}}}(F_{0})=1-q_{{\mathcal {E}}}(E_{3})\). So, \(1-q_{{\mathcal {E}}}(E_{3})=cq_{{\mathcal {E}}}(E_{3})\). Thus, \(q_{{\mathcal {E}}}(E_{3})=\nicefrac {1}{(1+c)}\). Since \(q_{{\mathcal {E}}}(E_{3})+r_{{\mathcal {F}}}(F_{0})=1\), one has that \(r_{{\mathcal {F}}}(F_{0})=\nicefrac {c}{(1+c)}\). By Claim 3, it follows that \(r_{{\mathcal {F}}}(F_{0})=q_{{\mathcal {E}}}(F_{0})=\nicefrac {c}{(1+c)}\) and \(q_{{\mathcal {E}}}(E_{3})=r_{{\mathcal {F}}}(E_{3})=\nicefrac {1}{(1+c)}\).

One knows that \(F_{0}=\{a,b\}\) and the complement of \(F_{0}\) is \(\{\omega _{1},\dots ,\omega _{n}\}\). By additivity, in two rounds of Jeffrey updating, a bold Bayesian agent cannot reach from her prior p any posterior r such that \(r(\{a\})+r(\{b\})\ne \nicefrac {c}{(1+c)}\) and \(r(\{\omega _{1}\})+\dots +r(\{\omega _{n}\})\ne \nicefrac {1}{(1+c)}\). This amounts to an infinite number of unreachable posteriors. Moreover, the set of unreachable posteriors (those for which \(r(\{a\})+r(\{b\})\ne \nicefrac {c}{(1+c)}\) and \(r(\{\omega _{1}\})+\dots +r(\{\omega _{n}\})\ne \nicefrac {1}{(1+c)}\)) has at least continuum cardinality. \(\square \)
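The constraint driving Proposition 2 can also be exhibited numerically. The sketch below uses a hypothetical prior over \(\{a,b,\omega _{1},\omega _{2},\omega _{3}\}\); the partitions follow the shape described in the proof of Lemma 1, and the evidence weights respect the forced values \(q_{{\mathcal {E}}}(E_{3})=\nicefrac {1}{(1+c)}\) and \(r_{{\mathcal {F}}}(F_{0})=\nicefrac {c}{(1+c)}\), while the remaining weights are chosen freely.

```python
# Two-step Jeffrey updating under the Proposition 2 constraint (hypothetical data).
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

p = {'a': F(1,6), 'b': F(1,3), 1: F(1,4), 2: F(1,6), 3: F(1,12)}  # hypothetical prior
E  = [{'a'}, {'b'}, {1, 2, 3}]               # E_1 = {a}, E_2 = {b}, E_3 = {w1, w2, w3}
Fp = [{'a', 'b'}, {1}, {2}, {3}]             # F_0 = {a, b}; 'Fp' avoids the alias F

c  = (p['a'] + p['b']) / (p[1] + p[2] + p[3])   # c = p(F_0)/p(E_3); here c = 1
qE = [F(1,8), c/(1+c) - F(1,8), 1/(1+c)]        # forced: q_E(E_3) = 1/(1+c)
rF = [c/(1+c), F(1,4), F(1,8), F(1,8)]          # forced: r_F(F_0) = c/(1+c)

pEF = jeffrey(jeffrey(p, E, qE), Fp, rF)
pFE = jeffrey(jeffrey(p, Fp, rF), E, qE)
assert pEF == pFE                               # the two orders commute
assert pEF['a'] + pEF['b'] == c / (1 + c)       # posterior mass on F_0 is pinned
```

However the free weights are varied, the final posterior puts exactly \(\nicefrac {c}{(1+c)}\) on \(F_{0}\); every target distribution with a different mass on \(F_{0}\) therefore lies in the Blind Spot.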

B Proofs for Section 4 (Generalisations and discussion)

Lemma 2

Let a prior p, \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) be given such that \(P_{i}=K_{j}=\{\omega _{i}\}\). If \(p^{{\mathcal {P}}{\mathcal {K}}} = p^{{\mathcal {K}}{\mathcal {P}}}\), then \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})\) and \(p^{{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {K}}{\mathcal {P}}}(\{\omega _{i}\})\).

Proof

Assume that \(\{\omega _{i}\}\) is a singleton in both \({\mathcal {P}}\) and \({\mathcal {K}}\), i.e., \(\{\omega _{i}\}\in {\mathcal {P}}\) and \(\{\omega _{i}\}\in {\mathcal {K}}\). For example, \({\mathcal {P}}\) and \({\mathcal {K}}\) could look as follows:

$$\begin{aligned} {\mathcal {P}}&=\Big \{P_{1}=\{\omega _{1}\}, P_{2}=\{\omega _{2}\},\dots , P_{i}=\{\omega _{i}\},\dots ,P_{n-1} =\{\omega _{n-1},\omega _{n}\}\Big \}\\ {\mathcal {K}}&=\Big \{K_{0}=\{\omega _{1},\omega _{2}\}, K_{1}=\{\omega _{3}\},\dots , K_{j} =\{\omega _{i}\},\dots ,K_{n}=\{\omega _{n}\} \Big \}. \end{aligned}$$

Then, let \(P_{i}=K_{j}=\{\omega _{i}\}\) and assume that p is a faithful prior of a bold agent. Since \(\{\omega _{i}\}\in {\mathcal {S}}\), one can take \(\{\omega _{i}\}\) as one's B (see Eq. 1). Assume the agent updates \(p(\{\omega _{i}\})\) on \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), specifically on \(P_{i}=\{\omega _{i}\}\) (see Observation 2 for updating on singleton sets):

$$\begin{aligned} p^{{\mathcal {P}}}(\{\omega _{i}\})=\frac{l_{{\mathcal {P}}}(\{\omega _{i}\})}{p(\{\omega _{i}\})}\,p(\{\omega _{i}\})\;\;\;\;\text {so}\;\;\;\;p^{{\mathcal {P}}}(\{\omega _{i}\}) =l_{{\mathcal {P}}}(\{\omega _{i}\}). \end{aligned}$$
(8)

Now, assume that the agent updates \(p(\{\omega _{i}\})\) on \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\), specifically on \(K_{j}=\{\omega _{i}\}\):

$$\begin{aligned} p^{{\mathcal {K}}}(\{\omega _{i}\})=\frac{g_{{\mathcal {K}}}(\{\omega _{i}\})}{p(\{\omega _{i}\})}\,p(\{\omega _{i}\})\;\;\;\;\text {so}\;\;\;\;p^{{\mathcal {K}}}(\{\omega _{i}\})=g_{{\mathcal {K}}}(\{\omega _{i}\}). \end{aligned}$$
(9)

If \({\mathcal {K}}\) and \({\mathcal {P}}\) are Jeffrey-independent, then, by Definition 2, \(p^{{\mathcal {P}}}(K_{j})=p(K_{j})\) and \(p^{{\mathcal {K}}}(P_{i})=p(P_{i})\) holds for all i and j. For \(P_{i}=K_{j}=\{\omega _{i}\}\), one has that \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p(\{\omega _{i}\})\) and \(p^{{\mathcal {K}}}(\{\omega _{i}\})=p(\{\omega _{i}\})\). This means that \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\).

By Condition 1, the agent is allowed to first update with \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and then \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), or the other way around. For the sake of argument, assume that the bold agent first updates p on \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\). Her new prior after the first update is \(p^{\mathcal {P}}\). Now, assume that the agent updates \(p^{\mathcal {P}}\) on \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\), specifically, \(K_{j}=\{\omega _{i}\}\):

$$\begin{aligned} p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=\frac{g_{{\mathcal {K}}} (\{\omega _{i}\})}{p^{\mathcal {P}}(\{\omega _{i}\})}\,p^{\mathcal {P}}(\{\omega _{i}\})\;\;\;\;\text {so}\;\;\;\;p^{{\mathcal {P}} {\mathcal {K}}}(\{\omega _{i}\})=g_{{\mathcal {K}}}(\{\omega _{i}\}). \end{aligned}$$
(10)

By (9) and (10), \(p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\). But since \(p^{{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\), one has that \(p^{{\mathcal {P}}{\mathcal {K}}}(\{\omega _{i}\})=p^{{\mathcal {P}}}(\{\omega _{i}\})\). One could switch the order of \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) and \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) to get \(p^{{\mathcal {K}}{\mathcal {P}}}(\{\omega _{i}\})=p^{{\mathcal {K}}}(\{\omega _{i}\})\). \(\square \)
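A numerical sketch of Lemma 2, with hypothetical probabilities: \(\{\omega _{3}\}\) plays the role of the shared singleton \(P_{i}=K_{j}\), and the weights are chosen so that Definition 2 holds at that singleton, i.e., \(l_{{\mathcal {P}}}(\{\omega _{3}\})=g_{{\mathcal {K}}}(\{\omega _{3}\})=p(\{\omega _{3}\})\).

```python
# Lemma 2 on a five-point space (hypothetical prior and partitions).
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

p = {1: F(1,4), 2: F(1,4), 3: F(1,8), 4: F(1,4), 5: F(1,8)}
P = [{1}, {2}, {3}, {4, 5}]              # P_i = {3} is a singleton cell of P
K = [{1, 2}, {3}, {4}, {5}]              # K_j = {3} is a singleton cell of K
lP = [F(3,8), F(1,8), F(1,8), F(3,8)]    # l_P({3}) = p({3}) = 1/8; other weights move mass
gK = [F(1,4), F(1,8), F(3,8), F(1,4)]    # g_K({3}) = p({3}) = 1/8

pP, pK = jeffrey(p, P, lP), jeffrey(p, K, gK)
pPK, pKP = jeffrey(pP, K, gK), jeffrey(pK, P, lP)
# The shared singleton's value is pinned in one step and in both two-step orders:
assert pP[3] == pK[3] == pPK[3] == pKP[3] == p[3]
```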

Lemma 3

Let a prior p, \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) be given. If \(p^{{\mathcal {P}}{\mathcal {K}}} = p^{{\mathcal {K}}{\mathcal {P}}}\), then \(p(P^*)=l_{{\mathcal {P}}}(P^*)\) and \(p(K^*)=g_{{\mathcal {K}}}(K^*)\).

Proof

Assume that p is one’s faithful prior and the agent learns non-trivial uncertain evidence \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\) and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\). By Condition 3, there is \(\omega _{j}\) such that \(\{\omega _{j}\}\in {\mathcal {K}}\) and \(\omega _{j}\in P^*\), where \(P^*\in {\mathcal {P}}\) is not a singleton. Similarly, there is \(\omega _{i}\) such that \(\{\omega _{i}\}\in {\mathcal {P}}\) and \(\omega _{i}\in K^*\), where \(K^*\in {\mathcal {K}}\) is not a singleton. Assume that one proceeds with the singleton \(K_{j}=\{\omega _{j}\}\in {\mathcal {K}}\) such that \(\omega _{j}\in {\mathcal {P}}^*\). By Definition 2, \(p^{{\mathcal {P}}}(K_{j})=p(K_{j})\) gives \(p^{{\mathcal {P}}}(\{\omega _{j}\})=p(\{\omega _{j}\})\). So, using Eq. 1, one has:

$$\begin{aligned} p(\{\omega _{j}\})=p^{{\mathcal {P}}}(\{\omega _{j}\})=\frac{l_{{\mathcal {P}}}(P^*)}{p(P^*)} \,p(\{\omega _{j}\})\;\;\;\;\text {so}\;\;\;\;p(P^*)=l_{{\mathcal {P}}}(P^*). \end{aligned}$$

With an identical proof strategy, one can prove the analogous result if one takes a singleton set \(\{\omega _{i}\}\in {\mathcal {P}}\) and a non-singleton \(K^*\in {\mathcal {K}}\). \(\square \)
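The mechanism of Lemma 3 is easy to see numerically. In the hypothetical sketch below, \({\mathcal {K}}\) contains the singleton \(\{\omega _{1}\}\), which sits inside the non-singleton cell \(P^*=\{\omega _{1},\omega _{2}\}\) of \({\mathcal {P}}\); any weight \(l_{{\mathcal {P}}}(P^*)\ne p(P^*)\) moves that singleton and so violates Definition 2.

```python
# Lemma 3's constraint on a three-point space (hypothetical numbers).
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

p = {1: F(1,4), 2: F(1,4), 3: F(1,2)}
P = [{1, 2}, {3}]                         # P* = {1, 2} is not a singleton

bad = jeffrey(p, P, [F(1,4), F(3,4)])     # l_P(P*) = 1/4 != p(P*) = 1/2
assert bad[1] != p[1]                     # the singleton {w1} moves: Definition 2 fails

good = jeffrey(p, P, [F(1,2), F(1,2)])    # l_P(P*) = p(P*) = 1/2
assert good[1] == p[1]                    # the singleton stays put, as required
```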

Lemma 4

Let a prior p, \(\{{\mathcal {P}},l_{{\mathcal {P}}}\}\), and \(\{{\mathcal {K}},g_{{\mathcal {K}}}\}\) be given. If Lemma 3 holds, then \(p^{{\mathcal {P}}}(P^*)=p(P^*)\) and \(p^{{\mathcal {K}}}(K^*)=p(K^*)\).

Proof

Condition 1 says that every piece of evidence commutes with every other piece of evidence. In other words, the order of the evidence can be permuted so that any piece of evidence may be the first one the agent uses in the updating process. From Lemma 3, one knows that \(P^*\in {\mathcal {P}}\) and \(K^*\in {\mathcal {K}}\) are non-singleton sets such that \(p(P^*)=l_{{\mathcal {P}}}(P^*)\) and \(p(K^*)=g_{{\mathcal {K}}}(K^*)\). Let me focus only on \(P^*\). By assumption, \(P^*\in {\mathcal {S}}\), and so let \(B=P^*\). One knows that \(p(P^*\cap P^*)=p(P^*)\) and, by Lemma 3, \(p(P^*)=l_{{\mathcal {P}}}(P^*)\). So, by Equation 1, it holds that:

$$\begin{aligned} \begin{aligned} p^{{\mathcal {P}}}(P^*)&=\frac{l_{{\mathcal {P}}}(P^*)}{p(P^*)}\,p(P^*\cap P^*)\\&=p(P^*) \end{aligned} \end{aligned}$$
(11)

Analogous reasoning for \(K^*\) proves that \(p^{{\mathcal {K}}}(K^*)=p(K^*)\). \(\square \)

Proposition 3

If Lemma 4 holds, then the infinite bold Bayes Blind Spot is infinitely large and has at least continuum cardinality.

Proof

By Condition 1, a bold Bayesian agent can use any evidence first in the updating process. So, by Lemma 4, any posterior, e.g., \(p^{{\mathcal {P}}}\), will be bounded by the original prior p. That is, there will be a non-singleton set \(\{\omega _{1},\dots ,\omega _{n}\}=B\in {\mathcal {S}}\) (I have previously called such a set \(P^*\in {\mathcal {P}}\) or \(K^*\in {\mathcal {K}}\)) for which, by additivity, the following equality will hold:

$$\begin{aligned} p^{{\mathcal {P}}}(\{\omega _{1}\})+\dots +p^{{\mathcal {P}}}(\{\omega _{n}\})=p(\{\omega _{1}\})+\dots +p(\{\omega _{n}\}). \end{aligned}$$

So, a part of any posterior \(p^{{\mathcal {P}}}\) will be determined by the values originally given by the prior p. That is, no posterior credence function \(p^{{\mathcal {P}}}\) can be such that \(p^{{\mathcal {P}}}(\{\omega _{1}\})+\dots +p^{{\mathcal {P}}}(\{\omega _{n}\})\ne p(\{\omega _{1}\})+\dots +p(\{\omega _{n}\})\). Consequently, the probability \(p^{{\mathcal {P}}}(B^c)\) that a bold Bayesian agent can assign to the complement \(B^c\) of B must be such that \(p^{{\mathcal {P}}}(B^c)=1-p^{{\mathcal {P}}}(\{\omega _{1},\dots ,\omega _{n}\})\). Any posteriors which do not meet those equalities cannot be reached in the Bayesian updating process under consideration. This, however, amounts to an infinite number of unreachable posteriors. As in Proposition 2 and Example 5, this set of unreachable posteriors will have at least continuum cardinality. \(\square \)
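A final hypothetical sketch of the additivity bound used above: once Lemmas 3 and 4 force \(l_{{\mathcal {P}}}(P^*)=p(P^*)\), the posterior mass on the non-singleton cell \(P^*\) is pinned to its prior value, and varying that mass alone already yields continuum-many unreachable posteriors.

```python
# The additivity bound of Proposition 3 on a three-point space (hypothetical).
from fractions import Fraction as F

def jeffrey(p, partition, weights):
    post = {}
    for cell, q_cell in zip(partition, weights):
        mass = sum(p[w] for w in cell)
        for w in cell:
            post[w] = (q_cell / mass) * p[w] if mass > 0 else F(0)
    return post

p = {1: F(1,3), 2: F(1,3), 3: F(1,3)}
P = [{1, 2}, {3}]                        # P* = {1, 2} is not a singleton
lP = [p[1] + p[2], p[3]]                 # forced by Lemma 3: l_P(P*) = p(P*)
pP = jeffrey(p, P, lP)
assert pP[1] + pP[2] == p[1] + p[2]      # mass on P* is pinned to 2/3

# Every candidate posterior (t, s, 1 - t - s) with t + s != 2/3 is unreachable:
# one free real parameter already gives at least continuum-many such posteriors.
```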

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Janda, P. How much are bold Bayesians favoured? Synthese 200, 336 (2022). https://doi.org/10.1007/s11229-022-03825-5

