Provided here is a characterisation of absolute probability functions for intuitionistic (propositional) logic L, i.e. a set of constraints on the unary functions P from the statements of L to the reals which ensures that (i) if a statement A of L is provable in L, then P(A) = 1 for every P, L's axiomatisation being thus sound in the probabilistic sense, and (ii) if P(A) = 1 for every P, then A is provable in L, L's axiomatisation being thus complete in the probabilistic sense. As there are theorems of classical (propositional) logic that are not intuitionistic ones, there are unary probability functions for intuitionistic logic that are not classical ones. Because of this, also provided here is a means of singling out the classical probability functions from among the intuitionistic ones.
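By way of illustration, constraints of the kind at issue are often given along the following lines (a representative set adapted from standard axiomatisations of intuitionistic probability, not necessarily the paper's exact list; ⊢ here is intuitionistic deducibility):

  (P1) 0 ≤ P(A) ≤ 1
  (P2) if ⊢ A, then P(A) = 1
  (P3) if A ⊢ B, then P(A) ≤ P(B)
  (P4) P(A) + P(B) = P(A ∨ B) + P(A ∧ B)

On an axiomatisation of this shape, adding the classically valid but intuitionistically unprovable constraint P(A ∨ ¬A) = 1 is one way of singling out the classical functions from among the intuitionistic ones.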
The logical independence of two statements is tantamount to their probabilistic independence, the latter understood in a sense that derives from stochastic independence. And analogous logical and probabilistic senses of having the same factual content similarly coincide. These results are extended to notions of non-symmetrical independence and independence among more than two statements.
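For orientation (a sketch of the intended notions' general shape, not the paper's official definitions): stochastic independence relative to a single function P is the familiar identity

  P(A & B) = P(A) × P(B),

and a function-quantified sense derives from asking when this identity can hold non-trivially, e.g. A and B are logically independent just in case the identity holds for some P with 0 < P(A) < 1 and 0 < P(B) < 1. Likewise, A and B have the same factual content in the logical sense (each is deducible from the other) just in case P(A) = P(B) for every P.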
Gentzen's account of logical consequence is extended so as to become a matter of degree. We characterize and study two kinds of functions G(X,Y), taking values between 0 and 1, that represent the degree to which the set X of statements (understood conjunctively) logically implies the set Y of statements (understood disjunctively). It is then shown that these functions are essentially the same as the absolute and the relative probability functions described by Carnap.
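One natural way to picture such a function G (offered as an illustration, not as the paper's official construction): for finite X = {A1, ..., Am} and Y = {B1, ..., Bn}, take

  G(X, Y) = Pr(B1 ∨ ... ∨ Bn, A1 & ... & Am),

the probability of the disjunction of Y relative to the conjunction of X. Gentzen's X ⊢ Y then reappears as the limiting case in which G(X, Y) = 1 for every such G.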
Shown here is that a constraint used by Popper in The Logic of Scientific Discovery (1959) for calculating the absolute probability of a universal quantification, and one introduced by Stalnaker in "Probability and Conditionals" (1970, 70) for calculating the relative probability of a negation, are too weak for the job. The constraint wanted in the first case is in Bendall (1979) and that wanted in the second case is in Popper (1959).
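For orientation, constraints of the two kinds at issue are commonly rendered as follows (standard formulations supplied as assumptions about their general shape, not quoted from the paper). For universal quantification, the constraint wanted is of the limit form

  P((∀x)A) = lim_n P(A(x/t1) & ... & A(x/tn)),

with t1, t2, ... an enumeration of the terms, rather than a mere upper-bound requirement on P((∀x)A). For negation, Popper's constraint runs

  Pr(∼A, B) = 1 − Pr(A, B), unless Pr(C, B) = 1 for every statement C,

the proviso exempting only the 'abnormal' B relative to which every statement has probability 1; Stalnaker's proviso exempts more, and hence constrains Pr too little.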
This paper studies the extent to which probability functions are recursively definable. It proves, in particular, that the (absolute) probability of a statement A is recursively definable from a certain point on, to wit: from the (absolute) probabilities of certain atomic components and conjunctions of atomic components of A on, but to no further extent. And it proves that, generally, the probability of a statement A relative to a statement B is recursively definable from a certain point on, to wit: from the probabilities relative to that very B of certain atomic components and conjunctions of atomic components of A, but again to no further extent. These and other results are extended to the less studied case where A and B are compounded from atomic statements by means of "∀" as well as "∼" and "&". The absolute probability functions considered are those of Kolmogorov and Carnap, and the relative ones are those of Kolmogorov, Carnap, Rényi, and Popper.
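A small worked case, using only standard probability calculations: for A = ∼(p & ∼q), compounded from the atoms p and q,

  P(∼(p & ∼q)) = 1 − P(p & ∼q) = 1 − (P(p) − P(p & q)),

so P(A) is fixed once P(p) and P(p & q) are, i.e. once the probabilities of A's atomic components and of conjunctions thereof are. No further reduction is possible: P(p) = P(q) = 1/2, for instance, is compatible with any value of P(p & q) from 0 to 1/2.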
Kolmogorov's account of an absolute probability space presupposes a Boolean algebra as given, and so does Rényi's account of a relative probability space. Anxious to prove probability theory 'autonomous', Popper supplied accounts of probability spaces of which Boolean algebras, and accounts of which fields of sets, are not prerequisites but byproducts instead. I review the accounts in question, showing how Popper's issue from his concern for autonomy and how they differ from Kolmogorov's and Rényi's, and I examine in closing Popper's notion of 'autonomous independence'. So as not to interrupt the exposition, I allow myself but a few proofs in the main text, relegating others to the Appendix and indicating as I go along where in the literature the rest can be found.
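Roughly, and on one standard reconstruction of the autonomy claim (an illustration, not Popper's own wording): the algebra is recovered from Pr by treating elements as identical when they are probabilistically indistinguishable as first arguments,

  a ≡ b iff Pr(a, c) = Pr(b, c) for every c,

agreement as second arguments then following by the substitutivity law taken up in the next abstract. The equivalence classes, under the operations corresponding to conjunction and negation, can be shown to form a Boolean algebra, making the algebra a byproduct of Pr rather than a prerequisite for it.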
Teddy Seidenfeld recently claimed that Kolmogorov's probability theory transgresses the Substitutivity Law. Underscoring the seriousness of Seidenfeld's charge, the author shows that (Popper's version of) the law, to wit: if (∀D)(Pr(B,D) = Pr(C,D)), then Pr(A,B) = Pr(A,C), follows from just five constraints on Pr of the most elementary and most basic sort:

  C1. 0 ≤ Pr(A,B) ≤ 1
  C2. Pr(A,A) = 1
  C3. Pr(A & B, C) = Pr(A, B & C) × Pr(B, C)
  C4. Pr(A & B, C) = Pr(B & A, C)
  C5. Pr(A, B & C) = Pr(A, C & B)
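To give the flavor of the derivation, here are its opening moves (a sketch only; the full argument is the paper's). Suppose (∀D)(Pr(B,D) = Pr(C,D)). Instantiating D to C and to B and applying C2,

  Pr(B, C) = Pr(C, C) = 1 and Pr(C, B) = Pr(B, B) = 1,

whence by C3, Pr(A & B, C) = Pr(A, B & C) × Pr(B, C) = Pr(A, B & C), and symmetrically Pr(A & C, B) = Pr(A, C & B), which C5 identifies with Pr(A, B & C).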
Dealing initially with QC, the standard quantificational calculus of order one, the author comments on a shortcoming, reported in 1956 by Montague and Henkin, in Church's account of a proof from hypotheses, and sketches three ways of righting things. The third, which exploits a trick of Fitch's, is the simplest of the three. The author investigates it in some detail, supplying a fresh proof of UGT, the Universal Generalization Theorem. The proof holds good as one passes from QC to QC*, the presupposition-free variant of QC. Turning next to QC=, the standard quantificational calculus of order one with '=', and to the presupposition-free variant QC*= of QC=, the author establishes the lemmas needed there to prove UGT. That, given Fitch's account of a proof from hypotheses, UGT holds for QC= was argued in Leblanc's "Truth-value semantics", but the proof was in error.
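UGT, as standardly stated (the usual formulation, assumed here rather than quoted from the paper): if A is provable from the hypotheses in a set X, and the variable x does not occur free in any member of X, then (∀x)A is provable from X; in short,

  if X ⊢ A and x is not free in X, then X ⊢ (∀x)A.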