Kolmogorov's account in his  of an absolute probability space presupposes a Boolean algebra, and so does Rényi's account in his  and  of a relative probability space. Anxious to prove probability theory ‘autonomous’, Popper supplied in his  and  accounts of probability spaces of which Boolean algebras and fields, respectively, are not prerequisites but byproducts instead. I review the accounts in question, showing how Popper's issue from, and how they differ from, Kolmogorov's and Rényi's, and I examine on closing Popper's notion of ‘autonomous independence’. So as not to interrupt the exposition, I allow myself in the main text but a few proofs, relegating others to the Appendix and indicating as I go along where in the literature the rest can be found.
Teddy Seidenfeld recently claimed that Kolmogorov's probability theory transgresses the Substitutivity Law. Underscoring the seriousness of Seidenfeld's charge, the author shows that (Popper's version of) the law, to wit: If (∀D)(Pr(B,D) = Pr(C,D)), then Pr(A,B) = Pr(A,C), follows from just C1. 0 ≤ Pr(A,B) ≤ 1, C2. Pr(A,A) = 1, C3. Pr(A & B,C) = Pr(A,B & C) × Pr(B,C), C4. Pr(A & B,C) = Pr(B & A,C), C5. Pr(A,B & C) = Pr(A,C & B), five constraints on Pr of the most elementary and most basic sort.
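As an illustrative sanity check (not from the paper itself), C1–C5 and the Substitutivity Law can be verified mechanically on a toy finite model: events are sets of three equiprobable worlds, and Pr(A,B) is stipulated to be 1 when B has probability zero, a Popper-style convention assumed here for definiteness.

```python
from itertools import combinations

# Toy model: three equiprobable worlds; events are sets of worlds.
worlds = {0, 1, 2}
events = [frozenset(c) for r in range(len(worlds) + 1)
          for c in combinations(worlds, r)]

def p(a):
    # Absolute probability under the uniform measure.
    return len(a) / len(worlds)

def pr(a, b):
    # Relative probability; Pr(A,B) = 1 when P(B) = 0 (Popper-style convention).
    return p(a & b) / p(b) if p(b) > 0 else 1.0

# Check C1-C5 over all events.
for a in events:
    for b in events:
        assert 0 <= pr(a, b) <= 1                                       # C1
        assert pr(a, a) == 1                                            # C2
        for c in events:
            assert abs(pr(a & b, c) - pr(a, b & c) * pr(b, c)) < 1e-9   # C3
            assert pr(a & b, c) == pr(b & a, c)                         # C4
            assert pr(a, b & c) == pr(a, c & b)                         # C5

# Substitutivity: if Pr(B,D) = Pr(C,D) for every D, then Pr(A,B) = Pr(A,C).
for b in events:
    for c in events:
        if all(pr(b, d) == pr(c, d) for d in events):
            assert all(pr(a, b) == pr(a, c) for a in events)
```

The point of the paper is of course a derivation, not a model check; the sketch merely exhibits one structure in which the five constraints and the law all hold together.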
Consider a language SL having as its primitive signs one or more atomic statements, the two connectives ‘∼’ and ‘&,’ and the two parentheses ‘(’ and ‘)’; and presume the extra connectives ‘V’ and ‘≡’ defined in the customary manner. With the statements of SL substituting for sets, and the three connectives ‘∼,’ ‘&,’ and ‘V’ substituting for the complementation, intersection, and union signs, the constraints that Kolmogorov places in  on probability functions come to read: K1. 0 ≤ P(A), K2. P(A V ∼A) = 1, K3. If ⊦ ∼(A & B), then P(A V B) = P(A) + P(B), K4. If ⊦ A ≡ B, then P(A) = P(B).
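Read over a toy two-atom version of SL, the constraints K1–K4 can be checked mechanically. The sketch below is illustrative only: it takes ‘⊦’ semantically (as tautologyhood) and picks the uniform measure over valuations as P, neither of which the abstract itself fixes.

```python
from itertools import product

# Statements as nested tuples; two atomic statements suffice for illustration.
atoms = ['p', 'q']
vals = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]

def holds(s, v):
    # Truth of statement s under valuation v.
    op = s[0]
    if op == 'atom': return v[s[1]]
    if op == 'not':  return not holds(s[1], v)
    if op == 'and':  return holds(s[1], v) and holds(s[2], v)
    if op == 'or':   return holds(s[1], v) or holds(s[2], v)
    if op == 'iff':  return holds(s[1], v) == holds(s[2], v)

def P(s):
    # Uniform measure over valuations (an illustrative choice of P).
    return sum(holds(s, v) for v in vals) / len(vals)

def proves(s):
    # '⊦ s' read semantically, as tautologyhood.
    return all(holds(s, v) for v in vals)

A, notA = ('atom', 'p'), ('not', ('atom', 'p'))
assert 0 <= P(A)                                   # K1
assert P(('or', A, notA)) == 1                     # K2
if proves(('not', ('and', A, notA))):              # K3: A and ~A exclude each other
    assert P(('or', A, notA)) == P(A) + P(notA)
if proves(('iff', A, ('not', notA))):              # K4: A ≡ ~~A
    assert P(A) == P(('not', notA))
```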
Provided here is a characterisation of absolute probability functions for intuitionistic (propositional) logic L, i.e. a set of constraints on the unary functions P from the statements of L to the reals, which ensures that (i) if a statement A of L is provable in L, then P(A) = 1 for every P, L's axiomatisation being thus sound in the probabilistic sense, and (ii) if P(A) = 1 for every P, then A is provable in L, L's axiomatisation being thus complete in the probabilistic sense. As there are theorems of classical (propositional) logic that are not intuitionistic ones, there are unary probability functions for intuitionistic logic that are not classical ones. Provided here because of this is a means of singling out the classical probability functions from among the intuitionistic ones.
Shown here is that a constraint used by Popper in The Logic of Scientific Discovery (1959) for calculating the absolute probability of a universal quantification, and one introduced by Stalnaker in "Probability and Conditionals" (1970, 70) for calculating the relative probability of a negation, are too weak for the job. The constraint wanted in the first case is in Bendall (1979) and that wanted in the second case is in Popper (1959).