A search is under way for a theory that can accommodate our intuitions in population axiology. The object of this search has proved elusive. This is not surprising since, as we shall see, any welfarist axiology that satisfies three reasonable conditions implies at least one of three counter-intuitive conclusions. I shall start by pointing out the failures in three recent attempts to construct an acceptable population axiology. I shall then present an impossibility theorem and conclude with a short discussion of how it might be extended to pluralist axiologies, that is, axiologies that take more values than welfare into account.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is made tight, it is obvious that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
In this paper, we show that Arrow’s well-known impossibility theorem is instrumental in bringing the ongoing discussion about verisimilitude to a more general level of abstraction. After some preparatory technical steps, we show that Arrow’s requirements for voting procedures in social choice are also natural desiderata for a general verisimilitude definition that places content and likeness considerations on the same footing. Our main result states that no qualitative unifying procedure of a functional form can simultaneously satisfy the requirements of Unanimity, Independence of irrelevant alternatives and Non-dictatorship at the level of sentence variables. By giving a formal account of the incompatibility of the considerations of content and likeness, our impossibility result makes it possible to systematize the discussion about verisimilitude, and to understand it in more general terms.
A paradox of self-reference in beliefs in games is identified, which yields a game-theoretic impossibility theorem akin to Russell’s Paradox. An informal version of the paradox is that the following configuration of beliefs is impossible: Ann believes that Bob assumes that...
Zwart and Franssen’s impossibility theorem reveals a conflict between the possible-world-based content-definition and the possible-world-based likeness-definition of verisimilitude. In Sect. 2 we show that the possible-world-based content-definition violates four basic intuitions of Popper’s consequence-based content-account to verisimilitude, and therefore cannot be said to be in the spirit of Popper’s account, although this is the opinion of some prominent authors. In Sect. 3 we argue that in consequence-accounts, content-aspects and likeness-aspects of verisimilitude are not in conflict with each other, but in agreement. We explain this fact by pointing towards the deep difference between possible-world- and the consequence-accounts, which does not lie in the difference between syntactic (object-language) versus semantic (meta-language) formulations, but in the difference between ‘disjunction-of-possible-worlds’ versus ‘conjunction-of-parts’ representations of theories. Drawing on earlier work, we explain in Sect. 4 how the shortcomings of Popper’s original definition can be repaired by what we call the relevant element approach. We propose a quantitative likeness-definition of verisimilitude based on relevant elements which provably agrees with the qualitative relevant content-definition of verisimilitude on all pairs of comparable theories. We conclude the paper with a plea for consequence-accounts and a brief analysis of the problem of language-dependence (Sect. 6).
Population axiology concerns how to evaluate populations in regard to their goodness, that is, how to order populations by the relations “is better than” and “is as good as”. This field has been riddled with impossibility results which seem to show that our considered beliefs are inconsistent in cases where the number of people and their welfare varies.[1] All of these results have one thing in common, however. They all involve an adequacy condition that rules out Derek Parfit’s Repugnant Conclusion: For any perfectly equal population with very high positive welfare, there is a population with very low positive welfare which is better, other things being equal.[2]

[1] The informal Mere Addition Paradox in Parfit (1984), pp. 419ff is the locus classicus. For an informal proof of a similar result with stronger assumptions, see Ng (1989), p. 240. A formal proof with slightly stronger assumptions than Ng’s can be found in Blackorby and Donaldson (1991). For theorems with much weaker assumptions, see my (1999), (2000b), and especially (2000a), (2001), and (2009).

[2] See Parfit (1984), p. 388. My formulation is more general than Parfit’s apart from that he doesn’t demand that the people with very high welfare are equally well off. Expressions such as “a population with very high positive welfare”, “a population with very low positive welfare”, etc., are elliptical for the more cumbersome phrases “a population consisting only of lives with...”
Two recent papers (Cubitt and Sugden, 1994; Samuelson, 1992) have established impossibility results which cast doubt on the coherence of the assumption of ‘common knowledge of rationality’. It is shown that the Cubitt–Sugden result is the more powerful of the two impossibilities. Second, it is proved that the existence of a quasi-strict equilibrium is sufficient to construct sets which satisfy the Cubitt–Sugden axioms. This fact is used to establish that their impossibility result cannot arise in 2-player games. Finally, it is shown that if a weak symmetry postulate is added, a new impossibility result arises for this class of games.
Among the many sorts of problems encountered in decision theory, allocation problems occupy a central position. Such problems call for the assignment of a nonnegative real number to each member of a finite set of entities, in such a way that the values so assigned sum to some fixed positive real number s. Familiar cases include the problem of specifying a probability mass function on a countable set of possible states of the world, and the distribution of a certain sum of money, or other resource, among various enterprises. In determining an s-allocation it is common to solicit the opinions of more than one individual, which leads immediately to the question of how to aggregate their typically differing allocations into a single “consensual” allocation. Guided by the traditions of social choice theory, decision theorists have taken an axiomatic approach to determining acceptable methods of allocation aggregation. In such approaches so-called “independence” conditions have been ubiquitous. Such conditions dictate that the consensual allocation assigned to each entity should depend only on the allocations assigned by individuals to that entity, taking no account of the allocations that they assign to any other entities. While there are reasons beyond mere simplicity for subjecting allocation aggregation to independence, this radically anti-holistic stricture has frequently proved to severely limit the set of acceptable aggregation methods. As we show in what follows, the limitations are particularly acute in the case of three or more entities which must be assigned nonnegative values summing to some fixed positive number s. For if the set V⊆[0,s] of values that may be assigned to these entities satisfies some simple closure conditions and V is finite, then independence allows only for dictatorial or imposed aggregation.
This theorem builds on and extends a theorem of Bradley and Wagner and, when V={0,1}, yields as a corollary an impossibility theorem of Dietrich on judgment aggregation.
Gärdenfors' impossibility theorem draws attention to certain formal difficulties in defining a conditional connective from a notion of theory revision, via the Ramsey test. We show that these difficulties are not avoided by taking the background inference operation to be non-monotonic.
This paper critically engages Philip Mirowski's essay, "The scientific dimensions of social knowledge and their distant echoes in 20th-century American philosophy of science." It argues that although the cold war embrace of anti-democratic elitism as best suited for making decisions about engaging in nuclear war may seem to be politically and ideologically motivated, in fact we need to carefully consider the arguments underlying the new rational choice based political philosophies of the post-WWII era typified by Arrow's impossibility theorem. A distrust of democratic decision-making principles may be developed by social scientists whose leanings may be toward the left or right side of the spectrum of political practices.
It is a widespread intuition that the coherence of independent reports provides a powerful reason to believe that the reports are true. Formal results by Huemer, M. 1997. “Probability and Coherence Justification.” Southern Journal of Philosophy 35: 463–72; Olsson, E. 2002. “What is the Problem of Coherence and Truth?” Journal of Philosophy XCIX: 246–72; Olsson, E. 2005. Against Coherence: Truth, Probability, and Justification. Oxford University Press; and Bovens, L., and S. Hartmann. 2003. Bayesian Epistemology. Oxford University Press, prove that, under certain conditions, coherence cannot increase the probability of the target claim. These formal results, known as ‘the impossibility theorems’, have been widely discussed in the literature. They are taken to have significant epistemic upshot. In particular, they are taken to show that reports must first individually confirm the target claim before the coherence of multiple reports offers any positive confirmation. In this paper, I dispute this epistemic interpretation. The impossibility theorems are consistent with the idea that the coherence of independent reports provides a powerful reason to believe that the reports are true even if the reports do not individually confirm prior to coherence. Once we see that the formal discoveries do not have this implication, we can recover a model of coherence justification consistent with Bayesianism and these results. This paper, thus, seeks to turn the tide of the negative findings for coherence reasoning by defending coherence as a unique source of confirmation.
According to conciliatory views about the epistemology of disagreement, when epistemic peers have conflicting doxastic attitudes toward a proposition and fully disclose to one another the reasons for their attitudes toward that proposition (and neither has independent reason to believe the other to be mistaken), each peer should always change his attitude toward that proposition to one that is closer to the attitudes of those peers with which there is disagreement. According to pure higher-order evidence views, higher-order evidence for a proposition always suffices to determine the proper rational response to disagreement about that proposition within a group of epistemic peers. Using an analogue of Arrow's Impossibility Theorem, I shall argue that no conciliatory and pure higher-order evidence view about the epistemology of disagreement can provide a true and general answer to the question of what disagreeing epistemic peers should do after fully disclosing to each other the (first-order) reasons for their conflicting doxastic attitudes.
Agents are often assumed to have degrees of belief (“credences”) and also binary beliefs (“beliefs simpliciter”). How are these related to each other? A much-discussed answer asserts that it is rational to believe a proposition if and only if one has a high enough degree of belief in it. But this answer runs into the “lottery paradox”: the set of believed propositions may violate the key rationality conditions of consistency and deductive closure. In earlier work, we showed that this problem generalizes: there exists no local function from degrees of belief to binary beliefs that satisfies some minimal conditions of rationality and non-triviality. “Locality” means that the binary belief in each proposition depends only on the degree of belief in that proposition, not on the degrees of belief in others. One might think that the impossibility can be avoided by dropping the assumption that binary beliefs are a function of degrees of belief. We prove that, even if we drop the “functionality” restriction, there still exists no local relation between degrees of belief and binary beliefs that satisfies some minimal conditions. Thus functionality is not the source of the impossibility; its source is the condition of locality. If there is any non-trivial relation between degrees of belief and binary beliefs at all, it must be a “holistic” one. We explore several concrete forms this “holistic” relation could take.
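The lottery paradox the abstract invokes can be made concrete with a small sketch. This is illustrative only; the 3-ticket lottery and the 0.6 threshold are assumptions chosen for the example, not taken from the paper.

```python
# Lottery paradox sketch: a threshold rule from credences to binary beliefs
# yields a belief set that violates consistency.
from fractions import Fraction

N = 3                       # a fair 3-ticket lottery, exactly one winner
THRESHOLD = Fraction(3, 5)  # believe p iff credence(p) > 0.6 (hypothetical cutoff)

# Credences: P("ticket i loses") = 2/3 for each i; P("some ticket wins") = 1.
credences = {f"ticket {i} loses": Fraction(N - 1, N) for i in range(1, N + 1)}
credences["some ticket wins"] = Fraction(1)

# Apply the threshold rule "locally", proposition by proposition.
beliefs = {p for p, c in credences.items() if c > THRESHOLD}

# Every proposition clears the threshold, so all four are believed...
every_ticket_loses = all(f"ticket {i} loses" in beliefs for i in range(1, N + 1))

# ...yet the believed set is jointly inconsistent: if every ticket loses,
# no ticket wins, contradicting the believed "some ticket wins".
print(every_ticket_loses and "some ticket wins" in beliefs)  # True: inconsistency
```

Raising the threshold does not help in general: for any cutoff below 1, a lottery with enough tickets reproduces the same clash, which is the sense in which the problem generalizes.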
In 1951, Kenneth Arrow published his now celebrated book Social Choice and Individual Values. Although not the first book to be written on social choice, Arrow's work ushered in a voluminous literature mostly produced by economists but by philosophers and political scientists as well. Arrow's chief result was a proof of the impossibility of a social welfare function. He showed that there could be no decision procedure for aggregating individual preference orderings into a grand, overall social preference ordering. The result has been hailed by some as a sort of Gödel Theorem of economics. It has seemed to many to have, if not the complexity of the Gödel Theorem, at least the same astonishing counter-intuitiveness. On the other hand, some social choice theorists, while conceding the validity of the Arrow Theorem, have challenged its soundness by quarreling with one or more of its presuppositions.
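The aggregation failure Arrow proved general can be seen in miniature in a Condorcet cycle. The sketch below uses a hypothetical three-voter profile; it shows pairwise majority voting producing verdicts that no social preference ordering can satisfy.

```python
# Condorcet cycle sketch: majority voting over three rankings yields
# cyclic pairwise verdicts, so no overall social ordering exists.
from itertools import permutations

# One strict ranking per voter, best alternative first (hypothetical profile).
profiles = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True iff a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in profiles)
    return wins > len(profiles) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True

# No linear order of {A, B, C} agrees with all three majority verdicts.
consistent_orders = [o for o in permutations("ABC")
                     if all(majority_prefers(o[i], o[j])
                            for i in range(3) for j in range(i + 1, 3))]
print(consistent_orders)  # [] (empty)
```

Arrow's theorem shows this is not a quirk of majority rule: every aggregation procedure satisfying his conditions fails on some profile.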
Metalogic is an open-ended cognitive, formal methodology pertaining to semantics and information processing. The language that mathematizes metalogic is known as metalanguage and deals with metafunctions purely by extension on patterns. A metalogical process involves an effective enrichment in knowledge as logical statements, and, since human cognition is an inherently logic-based representation of knowledge, a metalogical process will always be aimed at developing the scope of cognition by exploring possible cognitive implications reflected on successive levels of abstraction. Indeed, it is basically impracticable to maintain logic-and-metalogic without paying heed to cognitive theorizing. For it is cognitively irrelevant whether possible conclusions are deduced from some premises before the premises are determined to be true or whether the premises themselves are determined to be true first and, then, the conclusions are deduced from them. In this paper we consider the term metalogic as inherently embodied under the framework referred to as cognitive science and mathematics. We propose a metalogical interpretation of Arrow’s impossibility theorem and, to that end, choice theory is understood as a mental course of action dealing with logic and metalogic issues, in which a possible mathematical approach to model a mental course of action is adopted as a systematic operating method. Nevertheless, if we look closely at the core of Arrow’s impossibility theorem in terms of metalogic, a second fundamental contribution to this framework is represented by the Nash equilibrium. As a result of the foregoing, therefore, we prove the metalogical equivalence between Arrow’s impossibility theorem and the existence of the Nash equilibrium. More specifically, Arrow’s requirements correspond to the Nash equilibrium for finite mixed strategies with no symmetry conditions. To demonstrate this proof, we first verify that Arrow’s set and Nash’s set are isomorphic to each other, both sets being under stated conditions of non-symmetry. Then, the proof is completed by virtue of category theory. Indeed, the two sets are categories that correspond biuniquely to one another and, thus, it is possible to define a covariant functor that preserves their mutual structures. According to this, we show the proof-dedicated metalanguage as a precursor to a special equivalence theorem.
Arrow’s impossibility result not only had a profound influence on welfare economics, but was, as this paper shows, also widely discussed in philosophy of science and in the engineering design literature.
We characterize seniority rules, also known as lexical dictatorships, under weak consistency constraints on the group’s choice function. These constraints are base triple-acyclicity in the case of binary choices and rationalizability (although not rationality) in the case of choices between an arbitrary number of alternatives. Existing results on these weakened constraints remain silent on the treatment of the group’s most junior individuals and therefore do not yield a complete characterization of seniority rules. We also impose a universal domain, binary strict Pareto optimality, binary Pareto indifference, binary independence of irrelevant alternatives, and the newly introduced condition of conflict resolution. The latter condition requires a social choice rule not to remain indecisive between alternatives for which individuals have conflicting preferences.
In preference aggregation a set of individuals express preferences over a set of alternatives, and these preferences have to be aggregated into a collective preference. When preferences are represented as orders, aggregation procedures are called social welfare functions. Classical results in social choice theory state that it is impossible to aggregate the preferences of a set of individuals under different natural sets of axiomatic conditions. We define a first-order language for social welfare functions and we give a complete axiomatisation for this class, without having the number of individuals or alternatives specified in the language. We are able to express classical axiomatic requirements in our first-order language, giving formal axioms for three classical theorems of preference aggregation by Arrow, by Sen, and by Kirman and Sondermann. We explore to what extent such theorems can be formally derived from our axiomatisations, obtaining positive results for Sen’s Theorem and the Kirman-Sondermann Theorem. For the case of Arrow’s Theorem, which does not apply in the case of infinite societies, we have to resort to fixing the number of individuals with an additional axiom. In the long run, we hope that our approach to formalisation can serve as the basis for a fully automated proof of classical and new theorems in social choice theory.
Separates the purely combinatorial component of Arrow's impossibility theorem in the theory of collective preference from its decision-theoretic part, and likewise for the closely related Blair/Bordes/Kelly/Suzumura theorem. Such a separation provides a particularly elegant proof of Arrow's result, via a new 'splitting theorem'.
The purpose of this article is to introduce a Cartesian product structure into the social choice theoretical framework and to examine if new possibility results to Gibbard’s and Sen’s paradoxes can be developed thanks to it. We believe that a Cartesian product structure is a pertinent way to describe individual rights in the social choice theory since it discriminates the personal features comprised in each social state. First we define some conceptual and formal tools related to the Cartesian product structure. We then apply these notions to Gibbard’s paradox and to Sen’s impossibility of a Paretian liberal. Finally we compare the advantages of our approach to other solutions proposed in the literature for both impossibility theorems.
In this paper I prove a theorem which is similar to Arrow's famous impossibility theorem. I show that no social welfare function can be both minimally majoritarian and also independent of irrelevant alternatives. My condition of minimal majoritarianism is substantially weaker than simple majority rule.
Various proponents of Cultural Theory have claimed that CT's Impossibility Theorem, namely that there are precisely five viable ways of life, has been formally proved. In this paper, I show that the Impossibility Theorem has not been formally proved and present a refutation of the Impossibility Theorem. With regard to the former, the problem areas identified include a failure to take into account the analogical nature of their theory and also a failure to carefully consider the nature of the relationship between mathematical models and the empirical phenomena that they are supposed to model. With regard to the latter, an empirically grounded description of a distinct, sixth viable way of life, here called the Philosophical way of life, is presented. Second, a general argument is presented that demonstrates the necessity of positing a sixth form of rationality and a sixth viable way of life in addition to the five rationalities and five ways of life recognized by CT.
Cultural Theory is breathtaking in its comprehensiveness and in its simplicity. With regard to CT’s comprehensiveness, it is entirely characteristic that when the three authors of Cultural Theory get around to asking themselves “What does cultural theory leave out?”, their answer turns out to be a hearty “Not much!” In a single work, Michael Thompson manages to credit CT with shedding light on everything from environmental policies and Kondratiev waves, to Everest expeditions, the literary preferences of Benjamin Disraeli, and Aristotle’s four causes.
Arrhenius’s impossibility theorems purport to demonstrate that no population axiology can satisfy each of a small number of intuitively compelling adequacy conditions. However, it has recently been pointed out that each theorem depends on a dubious assumption: Finite Fine-Grainedness. This assumption states that there exists a finite sequence of slight welfare differences between any two welfare levels. Denying Finite Fine-Grainedness makes room for a lexical population axiology which satisfies all of the compelling adequacy conditions in each theorem. Therefore, Arrhenius’s theorems fail to prove that there is no satisfactory population axiology. In this paper, I argue that Arrhenius’s theorems can be repurposed. Since all of our population-affecting actions have a non-zero probability of bringing about more than one distinct population, it is population prospect axiologies that are of practical relevance, and amended versions of Arrhenius’s theorems demonstrate that there is no satisfactory population prospect axiology. These impossibility theorems do not depend on Finite Fine-Grainedness, so lexical views do not escape them.
The debate over the question whether quantum mechanics should be considered as a complete account of microphenomena has a long and deeply involved history, a turning point in which has been certainly the Einstein-Bohr debate, with the ensuing charge of incompleteness raised by the Einstein-Podolsky-Rosen argument. In quantum mechanics, physical systems can be prepared in pure states that nevertheless have in general positive dispersion for most physical quantities; hence in the EPR argument, the attention is focused on the question whether the account of the microphysical phenomena provided by quantum mechanics is to be regarded as an exhaustive description of the physical reality to which those phenomena are supposed to refer, a question to which Einstein himself answered in the negative. However, there is a mathematical side of the completeness issue in quantum mechanics, namely the question whether the kind of states with positive dispersion can be represented as a different, dispersion-free kind of states in a way consistent with the mathematical constraints of the quantum mechanical formalism. From this point of view, the other source of the completeness issue in quantum mechanics is the no hidden variables theorem formulated by John von Neumann in his celebrated book on the mathematical foundations of quantum mechanics, the preface of which already anticipates the program and the conclusion concerning the possibility of ‘neutralizing’ the statistical character of quantum mechanics.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
Standard impossibility theorems on judgment aggregation over logically connected propositions either use a controversial systematicity condition or apply only to agendas of propositions with rich logical connections. Are there any serious impossibilities without these restrictions? We prove an impossibility theorem without requiring systematicity that applies to most standard agendas: Every judgment aggregation function (with rational inputs and outputs) satisfying a condition called unbiasedness is dictatorial (or effectively dictatorial if we remove one of the agenda conditions). Our agenda conditions are tight. When applied illustratively to (strict) preference aggregation represented in our model, the result implies that every unbiased social welfare function with universal domain is effectively dictatorial.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
John Barrow is increasingly recognized as one of our most elegant and accomplished science writers, a brilliant commentator on cosmology, mathematics, and modern physics. Barrow now tackles the heady topic of impossibility, in perhaps his strongest book yet. Writing with grace and insight, Barrow argues convincingly that there are limits to human discovery, that there are things that are ultimately unknowable, undoable, or unreachable. He first examines the limits on scientific inquiry imposed by the deficiencies of the human mind: our brain evolved to meet the demands of our immediate environment, Barrow notes, and much that lies outside this small circle may also lie outside our understanding. Barrow investigates practical impossibilities, such as those imposed by complexity, uncomputability, or the finiteness of time, space, and resources. Is the universe finite or infinite? Can information be transmitted faster than the speed of light? The book also examines the deeper theoretical restrictions on our ability to know, including Gödel's theorem (which proved that there were things that could not be proved) and Arrow's Impossibility Theorem about democratic voting systems. Finally, having explored the limits imposed on us from without, Barrow considers whether there are limits we should impose upon ourselves. For instance, if the secrets of the atom are to be found only by recreating extreme environments at great financial cost, just how much should we devote to that quest? Weaving together this intriguing tapestry, he illuminates some of the most profound questions of science, from the possibility of time travel to the very structure of the universe.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
There is an extensive literature in social choice theory studying the consequences of weakening the assumptions of Arrow's Impossibility Theorem. Much of this literature suggests that there is no escape from Arrow-style impossibility theorems unless one drastically violates the Independence of Irrelevant Alternatives (IIA). In this paper, we present a more positive outlook. We propose a model of comparing candidates in elections, which we call the Advantage-Standard (AS) model. The requirement that a collective choice rule (CCR) be rationalizable by the AS model is in the spirit of but weaker than IIA; yet it is stronger than what is known in the literature as weak IIA (two profiles alike on x, y cannot have opposite strict social preferences on x and y). In addition to motivating violations of IIA, the AS model makes intelligible violations of another Arrovian assumption: the negative transitivity of the strict social preference relation P. While previous literature shows that only weakening IIA to weak IIA or only weakening negative transitivity of P to acyclicity still leads to impossibility theorems, we show that jointly weakening IIA to AS rationalizability and weakening negative transitivity of P leads to no such impossibility theorems. Indeed, we show that several appealing CCRs are AS rationalizable, including even transitive CCRs.
It is argued in this paper that amalgamating confirmation from various sources is relevantly different from social-choice contexts, and that proving an impossibility theorem for aggregating confirmation measures directs attention to irrelevant issues.
Riker (1982) famously argued that Arrow’s impossibility theorem undermined the logical foundations of “populism”, the view that in a democracy, laws and policies ought to express “the will of the people”. In response, his critics have questioned the use of Arrow’s theorem on the grounds that not all configurations of preferences are likely to occur in practice; the critics allege, in particular, that majority preference cycles, whose possibility the theorem exploits, rarely happen. In this essay, I argue that the critics’ rejoinder to Riker misses the mark even if its factual claim about preferences is correct: Arrow’s theorem and related results threaten the populist’s principle of democratic legitimacy even if majority preference cycles never occur. In this particular context, the assumption of an unrestricted domain is justified irrespective of the preferences citizens are likely to have.
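The majority preference cycles at issue here are easy to exhibit concretely. The following minimal sketch (the three-voter profile is my own illustration, not drawn from the essay) shows pairwise majority voting failing to yield a collectively best alternative:

```python
# A minimal Condorcet cycle: three voters rank alternatives a, b, c.
profiles = [
    ["a", "b", "c"],  # voter 1: a > b > c
    ["b", "c", "a"],  # voter 2: b > c > a
    ["c", "a", "b"],  # voter 3: c > a > b
]

def majority_prefers(x, y):
    """True iff a strict majority of voters ranks x above y."""
    return sum(p.index(x) < p.index(y) for p in profiles) > len(profiles) / 2

for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Each pairwise contest is won 2-1, so the majority relation cycles:
# a beats b, b beats c, and c beats a; no alternative is collectively best.
```

Each individual ranking is perfectly rational; the cycle arises only at the collective level, which is the possibility Arrow-style results exploit.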
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that that difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970) and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result; the formal proof is presented in an appendix. The fourth section, finally, discusses some escape routes from that impossibility.
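The discursive (doctrinal) paradox mentioned above can be made concrete with a small sketch. The three-judge example below is a standard illustration of my own choosing, not taken from the paper: each judge holds a consistent judgment set over premises p, q and the conclusion r = (p and q), yet proposition-wise majority voting yields an inconsistent collective set.

```python
# Three judges vote on premises p, q and the conclusion r, where each
# individual judgment set respects r = (p and q).
judges = [
    {"p": True,  "q": True,  "r": True},   # accepts both premises, so accepts r
    {"p": True,  "q": False, "r": False},
    {"p": False, "q": True,  "r": False},
]

def majority(prop):
    """True iff a strict majority of judges accepts the proposition."""
    return sum(j[prop] for j in judges) > len(judges) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "r")}
print(collective)  # {'p': True, 'q': True, 'r': False}

# Every individual set is consistent, but the collective set is not:
# the majority accepts p and accepts q, yet rejects r = (p and q).
assert all(j["r"] == (j["p"] and j["q"]) for j in judges)
assert collective["r"] != (collective["p"] and collective["q"])
```

A premise-based procedure would instead take the majority views on p and q and derive r from them, restoring consistency at the price of overriding the majority view on r itself.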
Population axiology concerns how to evaluate populations in regard to their goodness, that is, how to order populations by the relations “is better than” and “is as good as”. This field has been riddled with paradoxes and impossibility results which seem to show that our considered beliefs are inconsistent in cases where the number of people and their welfare varies. All of these results have one thing in common, however. They all involve an adequacy condition that rules out Derek Parfit’s Repugnant Conclusion. Moreover, some theorists have argued that we should accept the Repugnant Conclusion and hence that avoidance of this conclusion is not a convincing adequacy condition for a population axiology. As I shall show in this chapter, however, one can replace avoidance of the Repugnant Conclusion with a logically weaker and intuitively more convincing condition. The resulting theorem involves, to the best of my knowledge, logically weaker and intuitively more compelling conditions than the other theorems presented in the literature. As such, it challenges the very existence of a satisfactory population ethics.
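For readers unfamiliar with the Repugnant Conclusion, the arithmetic behind it is simple. The sketch below is my own minimal illustration (the population sizes and welfare levels are invented for the example, not taken from the chapter): a simple total-welfare ordering ranks a vast population of lives barely worth living above a smaller population of very high welfare.

```python
# Total-welfare ordering: a population is represented as
# (number of people, welfare level per person).
def total_welfare(population):
    size, welfare = population
    return size * welfare

A = (10_000, 100)      # 10,000 people at very high welfare
Z = (10_000_000, 1)    # 10,000,000 people with lives barely worth living

print(total_welfare(A))  # 1000000
print(total_welfare(Z))  # 10000000 -- the total view ranks Z above A
```

This is the counter-intuitive verdict that most adequacy conditions in the literature are designed to rule out.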
According to a popular narrative, in 1932 von Neumann introduced a theorem intended as a proof of the impossibility of hidden variables in quantum mechanics. However, the narrative goes, Bell later spotted a flaw that allegedly shows its irrelevance. Bell’s widely accepted criticism has been challenged by Bub and Dieks: they claim that the proof shows that viable hidden-variables theories cannot be theories in Hilbert space. Bub’s and Dieks’ reassessment has been in turn challenged by Mermin and Schack. Here I critically assess their reply, with the aim of bringing further clarification concerning the meaning, scope and relevance of von Neumann’s theorem. I show that despite Mermin and Schack’s response, Bub’s and Dieks’ reassessment is quite correct, and that this reading is strongly reinforced when we carefully consider the connection between von Neumann’s proof and Gleason’s theorem.
Any intermediate propositional logic (i.e., a logic including intuitionistic logic and contained in classical logic) can be extended to a calculus with ε- and τ-operators and critical formulas. For classical logic, this results in Hilbert’s ε-calculus. The first and second ε-theorems for classical logic establish conservativity of the ε-calculus over its classical base logic. It is well known that the second ε-theorem fails for the intuitionistic ε-calculus, as prenexation is impossible. The paper investigates the effect of adding critical ε- and τ-formulas and using the translation of quantifiers into ε- and τ-terms to intermediate logics. It is shown that conservativity over the propositional base logic also holds for such intermediate ετ-calculi. The “extended” first ε-theorem holds if the base logic is finite-valued Gödel-Dummett logic, fails otherwise, but holds for certain provable formulas in infinite-valued Gödel logic. The second ε-theorem also holds for finite-valued first-order Gödel logics. The methods used to prove the extended first ε-theorem for infinite-valued Gödel logic suggest applications to theories of arithmetic.
Kenneth Arrow’s “impossibility” theorem—or “general possibility” theorem, as he called it—answers a very basic question in the theory of collective decision-making. Say there are some alternatives to choose among. They could be policies, public projects, candidates in an election, distributions of income and labour requirements among the members of a society, or just about anything else. There are some people whose preferences will inform this choice, and the question is: which procedures are there for deriving, from what is known or can be found out about their preferences, a collective or “social” ordering of the alternatives from better to worse? The answer is startling. Arrow’s theorem says there are no such procedures whatsoever—none, anyway, that satisfy certain apparently quite reasonable assumptions concerning the autonomy of the people and the rationality of their preferences. The technical framework in which Arrow gave the question of social orderings a precise sense and its rigorous answer is now widely used for studying problems in welfare economics. The impossibility theorem itself set much of the agenda for contemporary social choice theory. Arrow accomplished this while still a graduate student. In 1972, he received the Nobel Prize in economics for his contributions.
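Arrow's question can be made concrete by sketching one familiar procedure that does produce a social ordering but fails one of his assumptions. The Borda count example below is my own illustrative choice (not discussed in the entry): it is non-dictatorial and always yields a complete ordering, yet it violates Independence of Irrelevant Alternatives, since moving a third alternative c on some ballots flips the social ranking of a and b even though no voter changes their mind about a versus b.

```python
# Borda count: a procedure deriving a social ordering from preferences.
def borda(profiles):
    """Return the alternatives ordered best-first by total Borda score."""
    alts = profiles[0]
    scores = {a: 0 for a in alts}
    for ranking in profiles:
        for pos, a in enumerate(ranking):
            scores[a] += len(alts) - 1 - pos  # top place earns the most points
    return sorted(scores, key=scores.get, reverse=True)

# Five voters; in both profiles every voter ranks a vs b the same way
# (three prefer a to b, two prefer b to a). Only c's position moves.
profile1 = [["a", "b", "c"]] * 3 + [["b", "a", "c"]] * 2
profile2 = [["a", "b", "c"]] * 3 + [["b", "c", "a"]] * 2

print(borda(profile1))  # ['a', 'b', 'c'] -- a socially above b
print(borda(profile2))  # ['b', 'a', 'c'] -- b socially above a
```

Because no voter's ranking of a relative to b changed between the two profiles, the flip in the social ranking of a and b is exactly the kind of IIA violation Arrow's assumptions rule out.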
This paper concerns voting with logical consequences, which means that anybody voting for an alternative x should vote for the logical consequences of x as well. Similarly, the social choice set is also supposed to be closed under logical consequences. The central result of the paper is that, given a set of fairly natural conditions, the only social choice functions that satisfy social logical closure are oligarchic (where a subset of the voters are decisive for the social choice). The set of conditions needed for the proof includes a version of Independence of Irrelevant Alternatives that also plays a central role in Arrow's impossibility theorem. (Published Online July 11 2006)