In a recent paper, Jeanne Peijnenburg and David Atkinson [Studia Logica 89:333–341] have challenged the foundationalist rejection of infinitism by giving an example of an infinite, yet explicitly solvable regress of probabilistic justification. So far, however, there has been no criterion for the consistency of infinite probabilistic regresses, and in particular, foundationalists might still question the consistency of the solvable regress proposed by Peijnenburg and Atkinson.
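The kind of regress at issue can be made concrete with a small numeric sketch. The parameter values below are hypothetical, and the code illustrates only the general structure of such a solvable regress (a constant-parameter recursion obtained from the law of total probability), not the details of Peijnenburg and Atkinson's paper:

```python
# Illustrative sketch (hypothetical parameters): the simplest solvable
# probabilistic regress, with constant conditional probabilities
# a = P(q_n | q_{n+1}) and b = P(q_n | not-q_{n+1}).
# Total probability gives the recursion p_n = b + (a - b) * p_{n+1},
# whose infinite iteration converges to b / (1 - a + b) whenever |a - b| < 1,
# regardless of the seed value at the far end of the chain.

def regress_probability(a, b, depth=200, seed=0.5):
    """Unwind the regress `depth` steps, starting from an arbitrary seed."""
    p = seed
    for _ in range(depth):
        p = b + (a - b) * p
    return p

def closed_form(a, b):
    """Fixed point of the recursion: the unique solution of the regress."""
    return b / (1 - a + b)

a, b = 0.9, 0.3   # hypothetical conditional probabilities
print(regress_probability(a, b))
print(closed_form(a, b))
```

The point of the demonstration is that the seed becomes irrelevant: the infinite regress determines a unique, well-defined probability for the target proposition.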
Following Lauwers and Van Liedekerke (1995), this paper explores, in a model-theoretic framework, the relation between Arrovian aggregation rules and ultraproducts, in order to investigate a source of impossibility results for the case of an infinite number of individuals and an aggregation rule based on a free ultrafilter of decisive coalitions.
This note employs the recently established consistency theorem for infinite regresses of probabilistic justification (Herzberg in Stud Log 94(3):331–345, 2010) to address some of the better-known objections to epistemological infinitism. In addition, another proof for that consistency theorem is given; the new derivation no longer employs nonstandard analysis, but utilises the Daniell–Kolmogorov theorem.
This paper formally explores the common ground between mild versions of epistemological coherentism and infinitism; it proposes—and argues for—a hybrid, coherentist–infinitist account of epistemic justification. First, the epistemological regress argument and its relation to the classical taxonomy regarding epistemic justification—foundationalism, infinitism and coherentism—are reviewed. We then recall recent results proving that an influential argument against infinite regresses of justification, which alleges their incoherence on account of probabilistic inconsistency, cannot be maintained. Furthermore, we prove that the Principle of Inferential Justification has rather unwelcome consequences—formally resembling the Sorites paradox—as soon as it is iterated and combined with a natural Bayesian perspective on probabilistic inferences. We conclude that strong versions of foundationalism and infinitism should be abandoned. Positively, we provide a rough sketch for a graded formal coherence notion, according to which infinite regresses of epistemic justification will often have more than a minimal degree of coherence.
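The Sorites-like erosion alluded to above can be illustrated with a toy calculation. The retention factor and threshold below are hypothetical and are not taken from the paper; the sketch only shows the structural point that many individually innocuous probabilistic discounts compound into a failure of justification:

```python
# Toy illustration (hypothetical numbers): if each inferential step
# transmits justification with a small probabilistic discount r < 1,
# then iterating an inferential-justification requirement drives the
# degree of justification below any fixed threshold, even though no
# single step looks problematic -- a Sorites-like situation.

def justification_after(steps, initial=0.99, retention=0.99):
    """Degree of justification after `steps` inferential links."""
    return initial * retention ** steps

threshold = 0.5
n = 0
while justification_after(n) >= threshold:
    n += 1
print(n)  # chain length at which justification first drops below 0.5
```

Each step loses only one percent, yet a modest chain length already pushes the degree of justification below one half.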
The rejection of an infinitesimal solution to the zero-fit problem by A. Elga does not seem to appreciate the opportunities provided by the use of internal finitely-additive probability measures. Indeed, internal laws of probability can be used to find a satisfactory infinitesimal answer to many zero-fit problems: not only to the one suggested by Elga, but also to Markov chain (that is, discrete and memoryless) models of reality. Moreover, the generalization of likelihoods that Elga has in mind is not as hopeless as it appears to be in his article. In fact, for many practically important examples, through the use of likelihoods one can succeed in circumventing the zero-fit problem.
1 The Zero-fit Problem on Infinite State Spaces
2 Elga's Critique of the Infinitesimal Approach to the Zero-fit Problem
3 Two Examples for Infinitesimal Solutions to the Zero-fit Problem
4 Mathematical Modelling in Nonstandard Universes?
5 Are Nonstandard Models Unnatural?
6 Likelihoods and Densities
A Internal Probability Measures and the Loeb Measure Construction
B The (Countable) Coin Tossing Sequence Revisited
C Solution to the Zero-fit Problem for a Finite-state Model without Memory
D An Additional Note on ‘Integrating over Densities’
E Well-defined Continuous Versions of Density Functions
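The infinitesimal strategy defended here can be sketched schematically for the coin-tossing case mentioned in the outline. This is a simplified reconstruction under standard nonstandard-analysis conventions, not the paper's own derivation:

```latex
% Schematic sketch: the zero-fit problem for the coin-tossing sequence
% and its infinitesimal resolution via an internal measure.
%
% Standard picture: every infinite toss sequence \omega has fit
%   P(\omega \mid T) = \lim_{n \to \infty} 2^{-n} = 0
% under a fair-coin theory T, so rival theories cannot be compared.
%
% Internal picture: fix an infinite hypernatural N and work with the
% hyperfinite space \{0,1\}^N.  A theory T_p ascribing probability p
% to heads assigns the internal, infinitesimal but nonzero weight
%   P(\omega \mid T_p) = p^{H(\omega)} \, (1-p)^{N - H(\omega)},
% where H(\omega) counts the heads in \omega.  Likelihood ratios
%   P(\omega \mid T_p) \,/\, P(\omega \mid T_q)
% are then well defined and can discriminate between rival theories,
% even though both standard-part fits are zero.
```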
The problem of how to rationally aggregate probability measures occurs in particular when a group of agents, each holding probabilistic beliefs, needs to rationalise a collective decision on the basis of a single ‘aggregate belief system’, and when an individual whose belief system is compatible with several probability measures wishes to evaluate her options on the basis of a single aggregate prior via classical expected utility theory. We investigate this problem by first recalling some negative results from preference and judgment aggregation theory, which show that the aggregate of several probability measures should not be conceived as the probability measure induced by the aggregate of the corresponding expected utility preferences. We describe how McConway’s (1981, pp. 410–414) theory of probabilistic opinion pooling can be generalised to cover the case of the aggregation of infinite profiles of finitely additive probability measures, too; we prove the existence of aggregation functionals satisfying responsiveness axioms à la McConway plus additional desiderata even for infinite electorates. On the basis of the theory of propositional-attitude aggregation, we argue that this is the most natural aggregation theory for probability measures. Our aggregation functionals for the case of infinite electorates are neither oligarchic nor integral-based and satisfy a weak anonymity condition. The delicate set-theoretic status of integral-based aggregation functionals for infinite electorates is discussed.
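For concreteness, here is a minimal sketch of opinion pooling in the finite, discrete case only (linear pooling over a hypothetical two-event space with hypothetical weights; this is the textbook baseline, not the paper's infinite-electorate functionals):

```python
# Minimal sketch (finite electorate, finite event space, hypothetical
# numbers): linear opinion pooling, i.e. the eventwise weighted average
# of the individual probability assignments.  Linear pooling preserves
# unanimous zero/one assignments and aggregates eventwise, in the
# spirit of McConway-style responsiveness conditions.

def linear_pool(measures, weights):
    """Pointwise weighted average of probability mass functions."""
    events = measures[0].keys()
    return {e: sum(w * m[e] for m, w in zip(measures, weights))
            for e in events}

p1 = {'rain': 0.7, 'sun': 0.3}   # hypothetical individual beliefs
p2 = {'rain': 0.2, 'sun': 0.8}
pooled = linear_pool([p1, p2], [0.5, 0.5])
print(pooled)   # {'rain': 0.45, 'sun': 0.55}
```

Note that the pooled assignment is again a probability measure: nonnegativity and normalisation are preserved by any convex combination.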
This article establishes the existence of a definable, countably saturated nonstandard enlargement of the superstructure over the reals. This nonstandard universe is obtained as the union of an inductive chain of bounded ultrapowers. The underlying ultrafilter is the one constructed by Kanovei and Shelah.
The defence of the no-alternatives argument in a recent paper by R. Dawid, S. Hartmann and J. Sprenger rests on the assumption that the number of acceptable alternatives to a scientific hypothesis is independent of the complexity of the scientific problem. This note proves a generalisation of the main theorem of Dawid, Hartmann and Sprenger in which this independence assumption is no longer necessary. Some of the other assumptions are also discussed, and the limitations of the no-alternatives argument are explored.
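The probabilistic structure of the no-alternatives argument can be sketched with a toy Bayesian computation. All numbers below are hypothetical; the sketch mirrors only the argument's qualitative structure (truth of the hypothesis and the outcome of the search for alternatives are both driven by how many viable alternatives exist), not the Dawid–Hartmann–Sprenger model itself:

```python
# Toy Bayesian sketch (hypothetical numbers): the evidence
# F = "the community has found no alternative to hypothesis H"
# is more likely when few serious alternatives exist, and H is more
# likely to be true when few alternatives exist; conditioning on F
# therefore raises the probability of H.

# Two coarse states of the world: 'few' or 'many' viable alternatives.
prior_state = {'few': 0.5, 'many': 0.5}
p_H_given = {'few': 0.6, 'many': 0.2}   # P(H true | state)
p_F_given = {'few': 0.8, 'many': 0.3}   # P(no alternative found | state)
# H and F are assumed conditionally independent given the state.

prior_H = sum(prior_state[s] * p_H_given[s] for s in prior_state)
p_F = sum(prior_state[s] * p_F_given[s] for s in prior_state)
posterior_H = sum(prior_state[s] * p_F_given[s] * p_H_given[s]
                  for s in prior_state) / p_F

print(prior_H, posterior_H)  # the posterior exceeds the prior
```

The confirmation is indirect: observing the failure to find alternatives shifts belief towards the 'few alternatives' state, which in turn favours the truth of the hypothesis.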
Arrow’s axiomatic foundation of social choice theory can be understood as an application of Tarski’s methodology of the deductive sciences—which is closely related to the latter’s foundational contribution to model theory. In this note we show in a model-theoretic framework how Arrow’s use of von Neumann and Morgenstern’s concept of winning coalitions makes it possible to exploit the algebraic structures involved in preference aggregation; this approach yields an alternative, indirect ultrafilter proof of Arrow’s dictatorship result. This link also connects Arrow’s seminal result to key developments and concepts in the history of model theory, notably ultraproducts and preservation results.
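The combinatorial core of the ultrafilter route to dictatorship can be illustrated directly. The code below is a standalone illustration, not the note's model-theoretic proof: the helper `is_ultrafilter` and the three-member society are made up for the example. The key fact is that on a finite society every ultrafilter is principal, i.e. consists of exactly the coalitions containing one fixed individual, so a rule whose decisive coalitions form an ultrafilter is dictatorial:

```python
# Illustration (not the note's proof): on a finite society, a family of
# "decisive" coalitions that forms an ultrafilter is necessarily the
# family of all coalitions containing one fixed individual -- a dictator.

from itertools import chain, combinations

def is_ultrafilter(family, society):
    """Check the ultrafilter axioms by brute force on a finite society."""
    subsets = [frozenset(c) for c in chain.from_iterable(
        combinations(society, r) for r in range(len(society) + 1))]
    family = {frozenset(c) for c in family}
    return (frozenset() not in family
            # closed under supersets
            and all(b in family
                    for a in family for b in subsets if a <= b)
            # closed under finite intersections
            and all(a & b in family for a in family for b in family)
            # exactly one of each coalition / its complement is decisive
            and all((s in family) != (frozenset(society) - s in family)
                    for s in subsets))

society = {0, 1, 2}
# All coalitions containing individual 1: the principal ultrafilter at 1.
principal_at_1 = [set(c) for c in chain.from_iterable(
    combinations(society, r) for r in range(len(society) + 1)) if 1 in c]
print(is_ultrafilter(principal_at_1, society))  # True: 1 dictates
```

With infinitely many individuals, free (non-principal) ultrafilters exist, which is exactly why the finite dictatorship argument breaks down there and different impossibility phenomena arise.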
Coherence is a key concept in many accounts of epistemic justification within ‘traditional’ analytic epistemology. Within formal epistemology, too, there is a substantial body of research on coherence measures. However, there has been surprisingly little interaction between the two bodies of literature. The reason is that the existing formal literature on coherence measures operates with a notion of belief system that is very different from—what we argue is—a natural Bayesian formalisation of the concept of belief system from traditional epistemology. Therefore, formal epistemology has so far only been concerned with one particular—arguably not even very natural—way of formalising the coherence of belief systems; it has by no means refuted the viability of coherentism. In contrast to the existing literature, we formalise belief systems as families of assignments of (conditional) degrees of belief (which may be compatible with several subjective probability measures). Within this framework, we propose a Bayesian formalisation of the thrust of BonJour’s coherence concept in The Structure of Empirical Knowledge (Harvard University Press, Cambridge, 1985), using a combination of Bayesian confirmation theory and basic graph theory. In excursions, we introduce graded notions of both logical and probabilistic consistency of belief systems—the latter being based on certain geometrical structures induced by probabilistic belief systems. For illustration, we reconsider BonJour’s “ravens” challenge (op. cit., p. 95f.). Finally, potential objections to our proposed formal coherence notion are explored.
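For a concrete point of comparison, here is one well-known coherence measure from the formal literature the abstract contrasts itself with (Shogenji's ratio measure for two propositions). This is an illustration of the existing approach, not the graded coherence notion proposed in the paper, and the degrees of belief are hypothetical:

```python
# Shogenji's ratio measure for two propositions A and B:
#   C(A, B) = P(A and B) / (P(A) * P(B)).
# C > 1 means the beliefs 'hang together' more than probabilistic
# independence would predict; C < 1 indicates tension between them.

def shogenji(p_a, p_b, p_ab):
    """Ratio coherence measure for two propositions."""
    return p_ab / (p_a * p_b)

# Hypothetical degrees of belief in two propositions and their conjunction.
print(shogenji(0.4, 0.5, 0.3))   # > 1: mutually supporting
print(shogenji(0.4, 0.5, 0.1))   # < 1: in tension
```

Measures of this kind take a single probability measure as given, which is precisely the modelling choice the abstract's richer notion of belief system is designed to relax.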
Cerreia-Vioglio et al. (2011, pp. 341–375) have proposed a very general axiomatisation of preferences in the presence of ambiguity, viz. Monotonic Bernoullian Archimedean preference orderings. This paper investigates the problem of Arrovian aggregation of such preferences—and proves dictatorial impossibility results for both finite and infinite populations. Applications to the special case of aggregating expected-utility preferences are given. A novel proof methodology for special aggregation problems, based on model theory, is employed.