
What to do with a forecast?


Abstract

In the literature one finds two non-equivalent responses to forecasts: deference and updating. Herein it is demonstrated that, under certain conditions, both responses are entirely determined by one’s beliefs as regards the calibration of the forecaster. Further, it is argued that the choice as to whether to defer to, or update on, a forecast is determined by the aim of the recipient of that forecast. If the aim of the recipient is to match their credence with the prevailing objective chances, they should defer to the forecast; if it is to maximize the veritistic value of their beliefs, they should update on the forecast.


Notes

  1. The same schema applies for qualified testimony, save that the truth of what is forecast is settled by a past or prevailing state of affairs.

  2. Along the way I shall offer suggestions for how to generalize the plausible answers to this question to forecasts where the forecast probability is a closed sub-interval of the unit interval and where \(\alpha \) has received multiple forecasts.

  3. Consequently, this paper stands as a refutation of Williamson’s (2013, p. 20) claim that ‘subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making’.

  4. The signal detection camp—see e.g. Gu and Wallsten (2001), Ferrell and McGoey (1980)—takes forecaster \(\sigma \) to be well calibrated on \({\mathbb {T}}\) if, for \(x_{i}\in \{0,0.1,0.2,\ldots ,1\}\) or some other partition of the unit interval, the proportion of correct forecasts made by \(\sigma \) with probability \(x_{i}\) is close to \(x_{i}\). For each \(x_{i}\), this proportion is a function of the frequencies of correct and incorrect forecasts by \(\sigma \) at \(x_{i}\) and \(x_{i-1}\), and of a prior probability of \(\sigma \) making a correct forecast, which is \(\frac{1}{m}\) for an \(m\)-choice task. (A toy numeric illustration of calibration-by-binning is sketched after these notes.)

  5. As forecasters do not, in their finite lifetimes, forecast every proposition belonging to \({\mathbb {T}}\) with every logically possible probability \(x\) in [0,1], let alone do so sufficiently often to establish a meaningful relative frequency, the frequencies determining their calibration must be taken to be hypothetical/conceptual.

  6. Further analysis of this type is given in Keren (1991), along with a study of the interpretational issues that have been neglected in the empirical studies of calibration: the meaningfulness of comparing subjective credences with objective relative frequencies, associated reference class problems, the difference between forecast probabilities and the forecaster’s credence in their prediction, the question of whether calibration is a good measure of the quality of a forecast or merely one component of such a measure (e.g., the Brier score), etc.

  7. By varying the propositions belonging to the topic, one can increase or decrease the difficulty of deciding their truth or falsity; by varying the participants and the topics, one can study the effect of specialist knowledge and expertise on calibration. As an example of the latter, one could take the aforementioned study (Keren 1987) showing that bridge players are much better calibrated in bridge-gaming forecasts than the general populace.

  8. The intuition that this might be so, if widespread, would go a long way towards explaining why so much focus has been placed on just this aspect of the forecaster’s competence, a focus decried in Yates (1982, pp. 148–151).

  9. Morris’ treatment differs from that herein in that it is not restricted to a finite propositional language.

  10. Proof: If any doubt exists as to the infallibility of the forecaster then \(0<\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\rangle (x)<1\).

  11. Proof: \(\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\rangle (0.5)=0.5\), so by the Sharp Empirical Deference Principle \(C_{\alpha }^{t+1}(A)_{F_{\sigma }^{A,0.5}}=0.5\). Substituting 0.5 for the expected calibrations in the Sharp Empirical Update Function gives \(C_{\alpha }^{t+1}(A)_{F_{\sigma }^{A,0.5}}=C_{\alpha }^t(A)\).

  12. Proof: Where \(C_{\alpha }^t(A)=0.5\), all terms in the update function cancel to leave \(C_{\alpha }^{t+1}(A)_{F_{\sigma }^{A,x}}=\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\rangle _{\alpha }^{t}(x)\), which is the Sharp Empirical Deference Principle.

  13. A settled proposition is one whose objective chance is trivial.
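
The notion of calibration at work in notes 4 and 5 can be illustrated numerically: bin a forecaster’s track record by forecast probability and compare, within each bin, the observed relative frequency of truths with the forecast probability. Below is a minimal sketch of this calibration-by-binning; the forecaster model, the data, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical track record: forecast probabilities drawn from an
# eleven-point partition of the unit interval; outcomes generated by a
# mildly overconfident forecaster whose true hit rate at forecast x
# is 0.5 + 0.8*(x - 0.5) rather than x itself.
levels = np.arange(0.0, 1.01, 0.1)
forecasts = rng.choice(levels, size=n)
outcomes = rng.random(n) < 0.5 + 0.8 * (forecasts - 0.5)

# Calibration check: for a well-calibrated forecaster the observed
# frequency within each bin should be close to x; here it will not be.
for x in levels:
    mask = np.isclose(forecasts, x)
    print(f"forecast {x:.1f}: observed frequency {outcomes[mask].mean():.3f}")
```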

References

  • Björkman, M. (1992). Knowledge, calibration, and resolution: A linear model. Organizational Behavior and Human Decision Processes, 51, 1–21.

  • Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78, 1–3.

  • Cox, D. R. (1958). Two further applications of a model of binary regression. Biometrika, 45, 562–565.

  • Dawid, A. P. (1982). The well-calibrated Bayesian. Journal of the American Statistical Association, 77, 605–613.

  • Ferrell, W. R., & McGoey, P. J. (1980). A model of calibration for subjective probabilities. Organizational Behavior and Human Performance, 26, 32–53.

  • Goldman, A. I. (1999). Knowledge in a social world. Oxford: Clarendon Press.

  • Gu, H., & Wallsten, T. S. (2001). On setting response criteria for calibrated subjective probability estimates. Journal of Mathematical Psychology, 45, 551–563.

  • Joyce, J. M. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief (Synthese Library). Berlin: Springer.

  • Kahneman, D., & Tversky, A. (1982). Variants of uncertainty. Cognition, 11, 143–157.

  • Keren, G. (1987). Facing uncertainty in the game of bridge: A calibration study. Organizational Behavior and Human Decision Processes, 39, 98–114.

  • Keren, G. (1991). Calibration and probability judgements: Conceptual and methodological issues. Acta Psychologica, 77, 217–273.

  • Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20, 159–183.

  • Lichtenstein, S., & Fischhoff, B. (1980). Training for calibration. Organizational Behavior and Human Performance, 26, 149–171.

  • Meacham, C. J. G. (2010). Two mistakes regarding the principal principle. British Journal for the Philosophy of Science, 61, 407–431.

  • Morris, P. A. (1974). Decision analysis expert use. Management Science, 20(9), 1233–1241.

  • Murphy, A. H. (1973). A new vector partition of the probability score. Journal of Applied Meteorology, 12, 595–600.

  • Ronis, D. L., & Yates, J. F. (1987). Components of probability judgment accuracy: Individual consistency and the effects of subject matter and assessment method. Organizational Behavior and Human Decision Processes, 40, 193–218.

  • Skyrms, B. (1988). Conditional chance. In J. Fetzer (Ed.), Probabilistic causation: Essays in honor of Wesley C. Salmon. Dordrecht: Reidel.

  • Wagenaar, W. A., & Keren, G. (1986). Does the expert know? The reliability of predictions and confidence ratings of experts. In E. Hollnagel & D. Woods (Eds.), Intelligent decision aids in process environments. Berlin: Springer.

  • Williamson, J. (2010). In defence of objective Bayesianism. Oxford: Oxford University Press.

  • Williamson, J. (2013). How uncertain do we need to be? Erkenntnis. doi:10.1007/s10670-013-9516-6.

  • Wright, G., & Wisudha, A. (1982). Distribution of probability assessments for almanac and future event questions. Scandinavian Journal of Psychology, 23, 219–224.

  • Yates, J. F. (1982). External correspondence: Decompositions of the mean probability score. Organizational Behavior and Human Performance, 30, 132–156.


Appendix 1


1.1 (a) Derivation of the deference principles

  • Sharp Empirical Deference Principle: \(C_{\alpha }^{t+1} (A)_{\langle F_{\sigma }^{A,x}\rangle }=\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x)\).

The principles required to derive this deference principle are:

Principle of conditionalization: \(C^{t+1}(A)_{E}=C^{t}(A|E)\)

Miller’s Principle: \(C(A|ch(A)=x)=x\), where \(ch\) is the objective chance function and \(C\) is initial or \(ch(A)=x\) screens off all other evidence for \(A\) in \(C\).

Conditionalizing out: \(P_{1}(P_{2}(A|B)=x|B)=P_{1}(P_{2}(A|B)=x)\).

Direct Inference \(^{**}\): If \(A\) is a value of a variable in \({\mathbb {T}}\), then

$$\begin{aligned} \left\langle ch\left( A\Big |F_{\sigma }^{A,x}\right) \right\rangle _{\alpha }^t=\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x). \end{aligned}$$

One also requires two theorems of probability theory:

  • Conditional continuous expansion theorem:

    $$\begin{aligned} P_{1}(A)=\int \limits _{0}^1 P_{1}(A|P_{2}(A|B)=x, B)P_{1}(P_{2}(A|B)=x|B)dx. \end{aligned}$$
  • Skyrms (1988) theorem: \(P_{1}(A|B,P_{2}(A|B)=x)=x\), if \(P_{1}(A|P_{2}(A)=x)=x\).

By the principle of conditionalization:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{F_{\sigma }^{A,x}}=C_{\alpha }^t \left( A\Big |F_{\sigma }^{A,x}\right) . \end{aligned}$$
(24)

By the conditional continuous expansion theorem:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{F_{\sigma }^{A,x}}=\int \limits _{0}^1C_{\alpha }^t\left( A\Big |ch\left( A\Big |F_{\sigma }^{A,x}\right) =y,F_{\sigma }^{A,x}\right) C_{\alpha }^t\left( ch\left( A\Big |F_{\sigma }^{A,x}\right) =y\Big |F_{\sigma }^{A,x}\right) dy. \end{aligned}$$

By conditionalizing out:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{F_{\sigma }^{A,x}}=\int \limits _{0}^1C_{\alpha }^t\left( A\Big |ch\left( A\Big |F_{\sigma }^{A,x}\right) =y,F_{\sigma }^{A,x}\right) C_{\alpha }^t\left( ch\left( A\Big |F_{\sigma }^{A,x}\right) =y\right) dy. \end{aligned}$$

By Miller’s Principle with Skyrms’ theorem applied:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{\left\langle F_{\sigma }^{A,x}\right\rangle }=\int \limits _{0}^1 y C_{\alpha }^t\left( ch\left( A\Big |F_{\sigma }^{A,x}\right) =y\right) dy=\left\langle ch\left( A\Big |F_{\sigma }^{A,x}\right) \right\rangle _{\alpha }^t. \end{aligned}$$

As \(A\) is a value of a variable in \({\mathbb {T}}\), Direct Inference\(^{**}\) applies, giving the Sharp Empirical Deference Principle:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{\left\langle F_{\sigma }^{A,x}\right\rangle }=\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x). \end{aligned}$$
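
To see the principle in action: suppose \(\alpha \)’s expected calibration for \(\sigma \) assigns \(\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (0.8)=0.74\), i.e. \(\alpha \) expects that 74% of the propositions \(\sigma \) forecasts at 0.8 are true. Deference then sets \(\alpha \)’s credence in \(A\) to 0.74 on receipt of the forecast, irrespective of \(\alpha \)’s prior. A minimal sketch, in which the calibration curve is a hypothetical stand-in:

```python
# Sharp Empirical Deference Principle as a lookup: the post-forecast
# credence in A is the expected calibration at the forecast probability x,
# whatever the prior credence in A was.
def defer(expected_calibration, x):
    return expected_calibration(x)

# Hypothetical expected-calibration curve for a mildly overconfident forecaster.
cal = lambda x: 0.5 + 0.8 * (x - 0.5)

print(defer(cal, 0.8))  # 0.74: credence in A after a forecast of 0.8
```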

The derivation of

  • Interval Empirical Deference Principle: \(C_{\alpha }^{t+1} (A)_{\langle F_{\sigma }^{A,\varDelta }\rangle }=\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }\)

follows exactly the same route, except that \(\varDelta \) is substituted for \(x\) and Direct Inference\(^{\ddagger \ddagger }\) is used in the final step.
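
As a worked instance of the interval average (where, as in Direct Inference\(^{\ddagger \ddagger }\) below, the hat denotes the average over \(\varDelta \), and the linear calibration curve is again a hypothetical stand-in): for \(\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x)=0.5+0.8(x-0.5)\) and \(\varDelta =[0.7,0.9]\),

$$\begin{aligned} \widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }=\frac{1}{0.2}\int \limits _{0.7}^{0.9}\bigl (0.5+0.8(x-0.5)\bigr )\,dx=0.5+0.8(0.8-0.5)=0.74, \end{aligned}$$

so deference on the interval forecast \([0.7,0.9]\) sets \(\alpha \)’s credence in \(A\) to 0.74.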

1.2 (b) Derivation of the empirical update function

The principles and theorems required for this derivation are the same as those for the empirical deference principle, save that one also requires Bayes’ theorem, and the following pair of principles in place of Direct Inference\(^{**}\):

  • Direct Inference \(^{\ddagger \ddagger }\): If \(A\) is a value of a variable in \({\mathbb {T}}\), then

    $$\begin{aligned} \left\langle ch\left( A\Big |F_{\sigma }^{A,\varDelta }\right) \right\rangle _{\alpha }^t= \widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }=\frac{1}{b-a}\int \limits _{a}^b\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x)dx. \end{aligned}$$
  • Direct Inference \(^{\dagger \dagger }\): If \(A\) is a value of a variable in \({\mathbb {T}}\), then

    $$\begin{aligned} \left\langle ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A}\right) \right\rangle _{\alpha }^t=\left\langle {\mathcal {F}}\left( F_{\sigma }^{{\mathbb {T}},\varDelta }\Big |F_{\sigma }^{{\mathbb {T}}}\right) \right\rangle _{\alpha }^t=\int \limits _{a}^{b}\left\langle \lambda _{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t (x)dx. \end{aligned}$$

This derivation begins with the principle of conditionalization and Bayes’ theorem:

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{F_{\sigma }^{A,\varDelta }}=C_{\alpha }^t \left( A\Big |F_{\sigma }^{A,\varDelta }\right) =\frac{C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |A\right) C_{\alpha }^t (A)}{C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |A\right) C_{\alpha }^t (A)+C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |\lnot A\right) C_{\alpha }^t (\lnot A)}. \end{aligned}$$
(25)

Our task is to find useful expressions for the likelihoods: \(C_{\alpha }^t (F_{\sigma }^{A,\varDelta }|A)\) and \(C_{\alpha }^t (F_{\sigma }^{A,\varDelta }|\lnot A)\). To this end we again invoke the conditional expansion theorem, though this time in its discrete form.

$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |A\right)&= C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) C_{\alpha }^t \left( F_{\sigma }^{A}\Big |A\right) \\&+C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |\lnot F_{\sigma }^{A},A\right) C_{\alpha }^t \left( \lnot F_{\sigma }^{A}\Big |A\right) \\ C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |\lnot A\right)&= C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) C_{\alpha }^t \left( F_{\sigma }^{A}\Big |\lnot A\right) \\&+C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |\lnot F_{\sigma }^{A},\lnot A\right) C_{\alpha }^t \left( \lnot F_{\sigma }^{A}\Big |\lnot A\right) \end{aligned}$$

As \(\lnot F_{\sigma }^{A}\wedge F_{\sigma }^{A,\varDelta }\) is a logical contradiction, the above reduce to:

$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |A\right)&= C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) C_{\alpha }^t \left( F_{\sigma }^{A}\Big |A\right) ,\end{aligned}$$
(26)
$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |\lnot A\right)&= C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) C_{\alpha }^t \left( F_{\sigma }^{A}\Big |\lnot A\right) . \end{aligned}$$
(27)

We now impose the condition that \(C_{\alpha }^t (F_{\sigma }^{A}|A)=C_{\alpha }^t (F_{\sigma }^{A}|\lnot A)=C_{\alpha }^t (F_{\sigma }^{A})\). This is eminently plausible: since the forecaster can forecast with any probability between 0 and 1, the mere fact that a forecast for \(A\) is issued carries no information as to \(A\)’s truth value. Using this identity to substitute back into (26) and (27), and then substituting these back into (25), delivers (after cancelling terms):

$$\begin{aligned} C_{\alpha }^{t+1} (A)_{F_{\sigma }^{A,\varDelta }}=\frac{C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) C_{\alpha }^t (A)}{C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) C_{\alpha }^t (A)+C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) C_{\alpha }^t (\lnot A)}. \end{aligned}$$
(28)

Now we use the conditional continuous expansion theorem to expand these new likelihoods:

$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right)&=\int \limits _{0}^1 C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |ch\left( F_{\sigma }^{A,\varDelta }\Big | F_{\sigma }^{A},A\right) =y, F_{\sigma }^{A}, A\right) \\&\quad \times C_{\alpha }^t \left( ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) =y\Big |F_{\sigma }^{A}, A\right) dy,\\ C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right)&=\int \limits _{0}^1 C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |ch\left( F_{\sigma }^{A,\varDelta }\Big | F_{\sigma }^{A},\lnot A\right) =y, F_{\sigma }^{A}, \lnot A\right) \\&\quad \times C_{\alpha }^t \left( ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) =y\Big |F_{\sigma }^{A}, \lnot A\right) dy. \end{aligned}$$

By Miller’s Principle, Skyrms’ theorem, and conditionalizing out, these expressions simplify to:

$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right)&= \int \limits _{0}^1 y C_{\alpha }^t \left( ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) =y\right) dy\\&= \left\langle ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right) \right\rangle _{\alpha }^t,\\ C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right)&= \int \limits _{0}^1 y C_{\alpha }^t \left( ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) =y\right) dy\\&= \left\langle ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) \right\rangle _{\alpha }^t. \end{aligned}$$

By Bayes’ theorem, the continuous expansion theorem, Direct Inference\(^{\dagger }\), Direct Inference\(^{*}\), Direct Inference\(^{\ddagger \ddagger }\), and Direct Inference\(^{\dagger \dagger }\), (20) and (21) were derived in the text (with \(\varDelta =[a,b]\subseteq [0,1]\)):

$$\begin{aligned} \left\langle ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A}, A\right) \right\rangle _{\alpha }^{t}&= 2\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}}^{\varDelta }\int \limits _{a}^b\left\langle \lambda _{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}(x)dx,\\ \left\langle ch\left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right) \right\rangle _{\alpha }^{t}&= 2\left( 1-\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}}^{\varDelta }\right) \int \limits _{a}^b\left\langle \lambda _{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}(x)dx. \end{aligned}$$

Making substitutions using (20) and (21) gives:

$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},A\right)&= 2\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}}^{\varDelta }\int \limits _{a}^b\left\langle \lambda _{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}(x)dx,\end{aligned}$$
(29)
$$\begin{aligned} C_{\alpha }^t \left( F_{\sigma }^{A,\varDelta }\Big |F_{\sigma }^{A},\lnot A\right)&= 2\left( 1-\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}}^{\varDelta }\right) \int \limits _{a}^b\left\langle \lambda _{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^{t}(x)dx. \end{aligned}$$
(30)

Substituting these back into (28) and cancelling terms gives the Interval Empirical Update Function:

$$\begin{aligned} C_{\alpha }^{t+1}(A)_{F_{\sigma }^{A,\varDelta }}=\frac{ \widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }C_{\alpha }^{t}(A)}{\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }C_{\alpha }^{t}(A)+\left( 1-\widehat{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t}^{\varDelta }\right) C_{\alpha }^{t}(\lnot A)}. \end{aligned}$$

Noting that, in the limit where the interval contains a single real, the average value of a function over that interval is just the value of the function at that real, we have the Sharp Empirical Update Function:

$$\begin{aligned} C_{\alpha }^{t+1}(A)_{F_{\sigma }^{A,x}}=\frac{ \left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t(x)C_{\alpha }^{t}(A)}{\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t(x)C_{\alpha }^{t}(A)+\left( 1-\left\langle {\mathcal {F}}_{\sigma }^{{\mathbb {T}}}\right\rangle _{\alpha }^t(x)\right) C_{\alpha }^{t}(\lnot A)}. \end{aligned}$$
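
As a check on the two update functions, the following minimal sketch implements them directly and reproduces the fixed-point observations of notes 11 and 12: a forecast at 0.5, from a forecaster whose expected calibration at 0.5 is 0.5, leaves the prior unchanged; and a prior of 0.5 makes updating coincide with deference. The expected-calibration curve is again a hypothetical stand-in.

```python
from scipy.integrate import quad

def sharp_update(prior, cal, x):
    """Sharp Empirical Update Function: Bayes' theorem with the expected
    calibration at x serving as the likelihood of a true forecast."""
    f = cal(x)
    return f * prior / (f * prior + (1 - f) * (1 - prior))

def interval_update(prior, cal, a, b):
    """Interval Empirical Update Function: as above, but with the average
    of the expected calibration over [a, b] in place of the point value."""
    f_hat = quad(cal, a, b)[0] / (b - a)
    return f_hat * prior / (f_hat * prior + (1 - f_hat) * (1 - prior))

cal = lambda x: 0.5 + 0.8 * (x - 0.5)  # hypothetical expected calibration

print(sharp_update(0.3, cal, 0.5))          # 0.3: note 11, a 0.5 forecast changes nothing
print(sharp_update(0.5, cal, 0.8))          # 0.74: note 12, prior 0.5 recovers deference
print(interval_update(0.3, cal, 0.7, 0.9))  # ~0.549: updating on the forecast [0.7, 0.9]
```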

Cite this article

Masterton, G. What to do with a forecast?. Synthese 191, 1881–1907 (2014). https://doi.org/10.1007/s11229-013-0384-z