There is an extensive literature in social choice theory studying the consequences of weakening the assumptions of Arrow's Impossibility Theorem. Much of this literature suggests that there is no escape from Arrow-style impossibility theorems unless one drastically violates the Independence of Irrelevant Alternatives (IIA). In this paper, we present a more positive outlook. We propose a model of comparing candidates in elections, which we call the Advantage-Standard (AS) model. The requirement that a collective choice rule (CCR) be rationalizable by the AS model is in the spirit of but weaker than IIA; yet it is stronger than what is known in the literature as weak IIA (two profiles alike on x, y cannot have opposite strict social preferences on x and y). In addition to motivating violations of IIA, the AS model makes intelligible violations of another Arrovian assumption: the negative transitivity of the strict social preference relation P. While previous literature shows that only weakening IIA to weak IIA or only weakening negative transitivity of P to acyclicity still leads to impossibility theorems, we show that jointly weakening IIA to AS rationalizability and weakening negative transitivity of P leads to no such impossibility theorems. Indeed, we show that several appealing CCRs are AS rationalizable, including even transitive CCRs.
Extensive measurement is the standard measurement-theoretic approach for constructing a ratio scale. It involves the comparison of objects that can be "concatenated" in an additively representable way. This paper studies the implications of extensively measurable welfare for social choice theory. We do this in two frameworks: an Arrovian framework with a fixed population and no interpersonal comparisons, and a generalized framework with variable populations and full interpersonal comparability. In each framework we use extensive measurement to introduce novel domain restrictions, independence conditions, and constraints on social evaluation. We prove a welfarism theorem for the resulting domains and characterize the social welfare functions that satisfy the axioms of extensive measurement at both the individual and social levels. The main results are simple axiomatizations of strong dictatorship in the Arrovian framework and classical utilitarianism in the generalized framework. We conclude by drawing some lessons regarding the utilitarian significance of Harsanyi's aggregation theorem.
This article offers some answers and alternatives to certain problems and proposals in the area of democratic theory. The essay focuses on the question of distinguishing systems that may appear democratic without being so from genuinely democratic systems. Unmasking some disguised actors in the democratic discourse of Latin America, the article argues that majority rule is preferable, as a basis for identifying the common good by way of the general interest, to minority rules, total consent, or constitutional foundations opposed to majority government. This is done without dismissing minorities, constitutions, or the ideal of unanimity; it simply puts them in their place. The investigation includes a new answer to the problem of the alleged irrationality of democracy, originally identified by Plato and later formalized by Arrow.
Due to the imperfect ability of individuals to discriminate between sufficiently similar alternatives, individual indifferences may fail to be transitive. I prove two impossibility theorems for social choice under indifference intransitivity, using axioms that are strictly weaker than Strong Pareto and that have been endorsed (sometimes jointly) in prior work on social choice under indifference intransitivity. The key axiom is Consistency, which states that if bundles are held constant for all but one individual, then society's preferences must align with those of that individual. Theorem 1 combines Consistency with Indifference Agglomeration, which states that society must be indifferent to combined changes in the bundles of two individuals if it is indifferent to the same changes happening to each individual separately. Theorem 2 combines Consistency with Weak Majority Preference, which states that society must prefer whatever the majority prefers if no one has a preference to the contrary. Given that indifference intransitivity is a necessary condition for the just-noticeable difference (JND) approach to interpersonal utility comparisons, a key implication of the theorems is that any attempt to use the JND approach to derive societal preferences must violate at least one of these three axioms.
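The mechanism behind JND-driven intransitive indifference can be sketched in a few lines. The utility values and the discrimination threshold below are hypothetical illustrations, not figures from the paper:

```python
# Indifference defined by a just-noticeable difference (JND) is naturally
# intransitive. The utilities and the threshold are illustrative assumptions.

JND = 1.0  # smallest utility gap the individual can discriminate

def indifferent(u_x, u_y, jnd=JND):
    """The individual cannot tell the two options apart."""
    return abs(u_x - u_y) < jnd

def prefers(u_x, u_y, jnd=JND):
    """Strict preference requires a noticeable gap."""
    return u_x - u_y >= jnd

u = {"a": 2.0, "b": 1.4, "c": 0.8}
# a ~ b and b ~ c, yet a is strictly preferred to c:
assert indifferent(u["a"], u["b"]) and indifferent(u["b"], u["c"])
assert prefers(u["a"], u["c"])
```

Each adjacent gap (0.6) falls below the threshold while the total gap (1.2) exceeds it, which is exactly how transitivity of indifference fails.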
Riker (1982) famously argued that Arrow’s impossibility theorem undermined the logical foundations of “populism”, the view that in a democracy, laws and policies ought to express “the will of the people”. In response, his critics have questioned the use of Arrow’s theorem on the grounds that not all configurations of preferences are likely to occur in practice; the critics allege, in particular, that majority preference cycles, whose possibility the theorem exploits, rarely happen. In this essay, I argue that the critics’ rejoinder to Riker misses the mark even if its factual claim about preferences is correct: Arrow’s theorem and related results threaten the populist’s principle of democratic legitimacy even if majority preference cycles never occur. In this particular context, the assumption of an unrestricted domain is justified irrespective of the preferences citizens are likely to have.
This paper develops and explores a new framework for theorizing about the measurement and aggregation of well-being. It is a qualitative variation on the framework of social welfare functionals developed by Amartya Sen. In Sen’s framework, a social or overall betterness ordering is assigned to each profile of real-valued utility functions. In the qualitative framework developed here, numerical utilities are replaced by the properties they are supposed to represent. This makes it possible to characterize the measurability and interpersonal comparability of well-being directly, without the use of invariance conditions, and to distinguish between real changes in well-being and merely representational changes in the unit of measurement. The qualitative framework is shown to have important implications for a range of issues in axiology and social choice theory, including the characterization of welfarism, axiomatic derivations of utilitarianism, the meaningfulness of prioritarianism, the informational requirements of variable-population ethics, the impossibility theorems of Arrow and others, and the metaphysics of value.
Tsui and Weymark (Economic Theory, 1997) have shown that the only continuous social welfare orderings on the whole Euclidean space which satisfy the weak Pareto principle and are invariant to individual-specific similarity transformations of utilities are strongly dictatorial. Their proof relies on functional equation arguments which are quite complex. This note provides a simpler proof of their theorem.
Are interpersonal comparisons of desire possible? Can we give an account of how facts about desires are grounded that underpins such comparisons? This paper supposes the answer to the first question is yes, and provides an account of the nature of desire that explains how this is so. The account is a modification of the interpretationist metaphysics of representation that the author has recently been developing. The modification is to allow phenomenological affective valence into the “base facts” on which correct interpretation is grounded. To use this extra resource within that theory to vindicate interpersonal comparisons, we will need to appeal to rational connections between level of valence and level of desire, which this paper sets out and examines.
Peer reviewers at many funding agencies and scientific journals are asked to score submissions both on individual criteria and overall. The overall scores should be some kind of aggregate of the criteria scores. Carole Lee identifies this as a potential locus for bias to enter the peer review process, which she calls commensuration bias. Here I view the aggregation of scores through the lens of social choice theory. I argue that, when reviewing grant proposals, it is in many cases impossible to avoid commensuration bias.
There is a long tradition of fruitful interaction between logic and social choice theory. In recent years, much of this interaction has focused on computer-aided methods such as SAT solving and interactive theorem proving. In this paper, we report on the development of a framework for formalizing voting theory in the Lean theorem prover, which we have applied to verify properties of a recently studied voting method. While previous applications of interactive theorem proving to social choice have focused on the verification of impossibility theorems, we aim to cover a variety of results ranging from impossibility theorems to the verification of properties of specific voting methods. In order to formalize voting theoretic axioms concerning adding or removing candidates and voters, we work in a variable-election setting whose formalization makes use of dependent types in Lean.
We propose six axioms concerning when one candidate should defeat another in a democratic election involving two or more candidates. Five of the axioms are widely satisfied by known voting procedures. The sixth axiom is a weakening of Kenneth Arrow's famous condition of the Independence of Irrelevant Alternatives (IIA). We call this weakening Coherent IIA. We prove that the five axioms plus Coherent IIA single out a method of determining defeats studied in our recent work: Split Cycle. In particular, Split Cycle provides the most resolute definition of defeat among all methods satisfying the six axioms for democratic defeat. In addition, we analyze how Split Cycle escapes Arrow's Impossibility Theorem and related impossibility results.
The article develops an internalist justification of welfare ethics based on empathy. It takes up Hume’s and Schopenhauer’s internalist (but not consistently developed) justification approach via empathy, but tries to solve three of their problems: 1. the varying strength of empathy depending on the proximity to the object of empathy, 2. the unclear metaethical foundation, 3. the absence of a quantitative model of empathy strength. 1. As a solution to the first problem, the article proposes to limit the foundation of welfare ethics to certain types of empathy. 2. In response to the second problem, an internalist metaethical conception of the justification of moral principles is outlined, the result of which is: the moral value of the well-being of persons is identical to the expected extent of (positive and negative) empathy arising from this well-being. 3. The contribution to the solution of the third problem, and the focus of the article, is an empirical model of the (subject’s) expected extent of empathy depending on (an object’s) well-being. According to this model, the expected extent of empathy is not proportional to well-being but follows a concave function and is therefore prioritarian. Accordingly, the article provides a sketch of an internalist justification of prioritarianism.
In this paper, first the term 'prioritarianism' is defined, with some mathematical precision, on the basis of intuitive conceptions of prioritarianism, especially the idea that "benefiting people matters more the worse off these people are". (The prioritarian weighting function is monotonically increasing and concave, while its first derivative is strictly decreasing and convex but positive throughout.) Furthermore, (moderate welfare) egalitarianism is characterized. In particular a new symmetry condition is defended, i.e. that egalitarianism evaluates upper and lower deviations from the social middle symmetrically and equally negatively (as do e.g. variance and Gini). Finally, it is shown that this feature also distinguishes egalitarianism extensionally from prioritarianism.
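One function with the prioritarian shape just described is the square root; the following sketch verifies the stated properties numerically. The choice of sqrt and of the sample welfare levels is purely illustrative, not taken from the paper:

```python
import math

# The square root is monotonically increasing and concave, and its first
# derivative (1 / (2 * sqrt(x))) is positive, decreasing, and convex.

def w(x):
    """Illustrative prioritarian weighting of a welfare level x > 0."""
    return math.sqrt(x)

def w_prime(x, h=1e-6):
    """Numerical first derivative of w (central difference)."""
    return (w(x + h) - w(x - h)) / (2 * h)

# Benefiting people matters more the worse off they are: the same absolute
# welfare gain counts for more at a low level than at a high one.
assert w(2) - w(1) > w(10) - w(9)

# The first derivative stays positive and strictly decreases.
derivs = [w_prime(x) for x in (1.0, 4.0, 9.0, 16.0)]
assert all(d > 0 for d in derivs)
assert all(a > b for a, b in zip(derivs, derivs[1:]))
```

Concavity of w is what makes the weighting prioritarian rather than utilitarian; a linear w would weight equal gains equally at all welfare levels.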
In Arrovian social choice theory assuming the independence of irrelevant alternatives, Murakami (1968) proved two theorems about complete and transitive collective choice rules that satisfy strict non-imposition (citizens’ sovereignty), one being a dichotomy theorem about Paretian or anti-Paretian rules and the other a dictator-or-inverse-dictator impossibility theorem without the Pareto principle. It has been claimed in the later literature that a theorem of Malawski and Zhou (1994) is a generalization of Murakami’s dichotomy theorem and that Wilson’s (1972) impossibility theorem is stronger than Murakami’s impossibility theorem, both by virtue of replacing Murakami’s assumption of strict non-imposition with the assumptions of non-imposition and non-nullness. In this note, we first point out that these claims are incorrect: non-imposition and non-nullness are together equivalent to strict non-imposition for all transitive collective choice rules. We then generalize Murakami’s dichotomy and impossibility theorems to the setting of incomplete social preference. We prove that if one drops completeness from Murakami’s assumptions, his remaining assumptions imply (i) that a collective choice rule is either Paretian, anti-Paretian, or dis-Paretian (unanimous individual preference implies noncomparability) and (ii) that adding proposed constraints on noncomparability, such as the regularity axiom of Eliaz and Ok (2006), restores Murakami’s dictator-or-inverse-dictator result.
In his classic monograph, Social Choice and Individual Values, Arrow introduced the notion of a decisive coalition of voters as part of his mathematical framework for social choice theory. The subsequent literature on Arrow’s Impossibility Theorem has shown the importance for social choice theory of reasoning about coalitions of voters with different grades of decisiveness. The goal of this paper is a fine-grained analysis of reasoning about decisive coalitions, formalizing how the concept of a decisive coalition gives rise to a social choice theoretic language and logic all of its own. We show that given Arrow’s axioms of the Independence of Irrelevant Alternatives and Universal Domain, rationality postulates for social preference correspond to strong axioms about decisive coalitions. We demonstrate this correspondence with results of a kind familiar in economics—representation theorems—as well as results of a kind coming from mathematical logic—completeness theorems. We present a complete logic for reasoning about decisive coalitions, along with formal proofs of Arrow’s and Wilson’s theorems. In addition, we prove the correctness of an algorithm for calculating, given any social rationality postulate of a certain form in the language of binary preference, the corresponding axiom in the language of decisive coalitions. These results suggest for social choice theory new perspectives and tools from logic.
Panels, boards, and committees throughout society evaluate all manner of things by grading them, first individually and then collectively. Thus risks are prioritized, research proposals are funded, and candidates are shortlisted for jobs. It is not usual to pick winners in political elections by grading the candidates, but there are examples in history. This article takes up a question about the quality of judgments and decisions made by grading: under which conditions are they likely to be right? An answer comes in the form of a jury theorem for median grading. Here, the collective grade for a thing is the median of its individually assigned grades—the one in the middle, when all of them are listed from "top" to "bottom." A second objective of this article is to suggest a solution to problems of voter ignorance in democracies. The idea is for democratic assemblies to use voting methods that make more of people's limited knowledge than do commonly used methods, such as majority voting. It turns out that in theory anyway, and perhaps also in practice, median grading can enable unenlightened assemblies to “track the truth”—even as majority voting would run them off the rails.
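The median-grading rule itself is simple to state in code. The grade scale and the panel below are made up for illustration; only the median rule comes from the article:

```python
# Collective grade = the median of the individually assigned grades: the one
# in the middle when all grades are listed from bottom to top.

GRADES = ["poor", "fair", "good", "excellent"]  # ordered from bottom to top

def median_grade(assigned):
    """Return the middle assigned grade (lower middle for even-sized
    panels, a common convention for grades on an ordinal scale)."""
    ranks = sorted(GRADES.index(g) for g in assigned)
    return GRADES[ranks[(len(ranks) - 1) // 2]]

panel = ["good", "poor", "excellent", "good", "fair"]
print(median_grade(panel))  # sorted: poor, fair, good, good, excellent -> good
```

Taking the lower middle on even-sized panels keeps the collective grade one that some grader actually assigned, which averaging would not guarantee on an ordinal scale.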
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
Much of the theoretical work on strategic voting makes strong assumptions about what voters know about the voting situation. A strategizing voter is typically assumed to know how other voters will vote and to know the rules of the voting method. A growing body of literature explores strategic voting when there is uncertainty about how others will vote. In this paper, we study strategic voting when there is uncertainty about the voting method. We introduce three notions of manipulability for a set of voting methods: sure, safe, and expected manipulability. With the help of a computer program, we identify voting scenarios in which uncertainty about the voting method may reduce or even eliminate a voter's incentive to misrepresent her preferences. Thus, it may be in the interest of an election designer who wishes to reduce strategic voting to leave voters uncertain about which of several reasonable voting methods will be used to determine the winners of an election.
What is the relationship between degrees of belief and binary beliefs? Can the latter be expressed as a function of the former—a so-called “belief-binarization rule”—without running into difficulties such as the lottery paradox? We show that this problem can be usefully analyzed from the perspective of judgment-aggregation theory. Although some formal similarities between belief binarization and judgment aggregation have been noted before, the connection between the two problems has not yet been studied in full generality. In this paper, we seek to fill this gap. The paper is organized around a baseline impossibility theorem, which we use to map out the space of possible solutions to the belief-binarization problem. Our theorem shows that, except in limiting cases, there exists no belief-binarization rule satisfying four initially plausible desiderata. Surprisingly, this result is a direct corollary of the judgment-aggregation variant of Arrow’s classic impossibility theorem in social choice theory.
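The lottery paradox mentioned above can be reproduced with the simplest binarization rule: believe exactly the propositions whose credence clears a threshold. The ticket count and threshold below are illustrative choices, not values from the paper:

```python
# Threshold binarization runs into the lottery paradox: each "ticket i
# loses" is believed, yet their conjunction is not, and it contradicts the
# certainty that some ticket wins.

N = 10            # tickets in a fair lottery; exactly one wins
THRESHOLD = 0.8   # believe any proposition with credence above this

cred_ticket_i_loses = 1 - 1 / N  # 0.9 for each individual ticket

# The rule licenses believing, of every single ticket, that it loses...
assert all(cred_ticket_i_loses > THRESHOLD for _ in range(N))

# ...but the conjunction "every ticket loses" falls well below the
# threshold, so the believed propositions are not closed under conjunction.
print((1 - 1 / N) ** N)  # about 0.349, under the 0.8 threshold
```

Raising the threshold only delays the problem: for any threshold below 1, a large enough lottery regenerates the same inconsistency.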
In a democracy, citizens should have some control over how they are governed. If they do not participate directly in making policy, they ought to maintain control over the public officials who design policy on their behalf. Rule by Multiple Majorities develops a novel theory of popular control: an account of what it is, why democracy's promise of popular control is compatible with what we know about actual democracies, and why it matters. While social choice theory suggests there is no such thing as a 'popular will' in societies with at least minimal diversity of opinion, the author argues that multiple, overlapping majorities can nonetheless have control, at the same time. After resolving this conceptual puzzle, the author explains why popular control is a realistic and compelling ideal for democracies, notwithstanding voters' low levels of information and other shortcomings.
In normative political theory, it is widely accepted that democracy cannot be reduced to voting alone, but that it requires deliberation. In formal social choice theory, by contrast, the study of democracy has focused primarily on the aggregation of individual opinions into collective decisions, typically through voting. While the literature on deliberation has an optimistic flavour, the literature on social choice is more mixed. It is centred around several paradoxes and impossibility results identifying conflicts between different intuitively plausible desiderata. In recent years, there has been a growing dialogue between the two literatures. This paper discusses the connections between them. Important insights are that (i) deliberation can complement aggregation and open up an escape route from some of its negative results; and (ii) the formal models of social choice theory can shed light on some aspects of deliberation, such as the nature of deliberation-induced opinion change.
A computer simulation is used to study collective judgements that an expert panel reaches on the basis of qualitative probability judgements contributed by individual members. The simulated panel displays a strong and robust crowd wisdom effect. The panel's performance is better when members contribute precise probability estimates instead of qualitative judgements, but not by much. Surprisingly, it doesn't always hurt for panel members to interpret the probability expressions differently. Indeed, coordinating their understandings can be much worse.
This paper describes an unknown episode in the development of the theory of social choice. In the summer of 1949, while at RAND, Quine worked on Arrow’s (im)possibility theorem. This work was eventually published as a paper on (applied) set theory totally disconnected from social choice. The working paper directly linked to Arrow’s work was never published. I alluded to this (then unwritten) paper in a number of presentations I made on ‘Logic and Social Choice’ in Turku, Bucharest, Boston, Strasbourg and Munich, between October 2013 and January 2015. It was eventually first presented during a conference at Queen Mary, University of London, 19–20 June 2015, on ‘Social Welfare, Justice and Distribution: Celebrating John Roemer’s Contributions to Economics, Political Philosophy and Political Science’, organized by Roberto Veneziani and Juan Moreno-Ternero. I am grateful to the participants for interesting reactions and comments, in particular Richard Arneson, Jon Elster, Marc Fleurbaey, Klaus Nehring and Gil Skillman. Jon Elster contacted Dagfinn Føllesdal, a well-known philosopher and a pre-eminent Quine scholar, who kindly responded to some queries. A more developed version was presented in Aix-en-Provence during the International Conference on Economic Philosophy and in Lund during the meeting of the Society for Social Choice and Welfare in June 2016. Comments from participants at these two events proved very helpful, including those of Gilles Campagnolo, Christian List and John Weymark. While in Lund, I also greatly benefitted from conversations with Adrian Miroiu. Finally, I am very grateful to an Associate Editor of this journal for excellent suggestions and for detecting some very annoying slips.
Juries, committees and expert panels commonly appraise things of one kind or another on the basis of grades awarded by several people. When everybody's grading thresholds are known to be the same, the results sometimes can be counted on to reflect the graders’ opinion. Otherwise, they often cannot. Under certain conditions, Arrow's ‘impossibility’ theorem entails that judgements reached by aggregating grades do not reliably track any collective sense of better and worse at all. These claims are made by adapting the Arrow–Sen framework for social choice to study grading in groups.
Among the possible solutions to the paradoxes of collective preferences, single-peakedness is significant because it has been associated with a suggestive conceptual interpretation: a single-peaked preference profile entails that, although individuals may disagree on which option is the best, they conceptualize the choice along a shared unique dimension, i.e. they agree on the rationale of the collective decision. In this article, we discuss the relationship between the structural property of single-peakedness and its suggested interpretation as uni-dimensionality of a social choice. In particular, we offer a formalization of the relationship between single-peakedness and its conceptual counterpart, we discuss their logical relations, and we question whether single-peakedness provides a rationale for collective choices.
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order or a linear priority order, respectively. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.
Arrow’s impossibility result stems chiefly from a combination of two requirements: independence and fixity. Independence says that the social choice is independent of individual preferences involving unavailable alternatives. Fixity says that the social choice is fixed by a social preference relation that is independent of what is available. Arrow found that requiring, further, that this relation be transitive yields impossibility. Here it is shown that allowing intransitive social indifference still permits only a vastly unsatisfactory system, a liberum veto oligarchy. Arrow’s argument for independence, though, undermines any rationale for fixity.
Kenneth Arrow’s “impossibility” theorem—or “general possibility” theorem, as he called it—answers a very basic question in the theory of collective decision-making. Say there are some alternatives to choose among. They could be policies, public projects, candidates in an election, distributions of income and labour requirements among the members of a society, or just about anything else. There are some people whose preferences will inform this choice, and the question is: which procedures are there for deriving, from what is known or can be found out about their preferences, a collective or “social” ordering of the alternatives from better to worse? The answer is startling. Arrow’s theorem says there are no such procedures whatsoever—none, anyway, that satisfy certain apparently quite reasonable assumptions concerning the autonomy of the people and the rationality of their preferences. The technical framework in which Arrow gave the question of social orderings a precise sense and its rigorous answer is now widely used for studying problems in welfare economics. The impossibility theorem itself set much of the agenda for contemporary social choice theory. Arrow accomplished this while still a graduate student. In 1972, he received the Nobel Prize in economics for his contributions.
This paper examines social choice theory with the strong Pareto principle. The notion of conditional decisiveness is introduced to clarify the underlying power structure behind strongly Paretian aggregation rules satisfying binary independence. We discuss the various degrees of social rationality: transitivity, semi-transitivity, the interval-order property, quasi-transitivity, and acyclicity.
In the theory of judgment aggregation, it is known for which agendas of propositions it is possible to aggregate individual judgments into collective ones in accordance with the Arrow-inspired requirements of universal domain, collective rationality, unanimity preservation, non-dictatorship and propositionwise independence. But it is only partially known (e.g., only in the monotonic case) for which agendas it is possible to respect additional requirements, notably non-oligarchy, anonymity, no individual veto power, or implication preservation. We fully characterize the agendas for which there are such possibilities, thereby answering the most salient open questions about propositionwise judgment aggregation. Our results build on earlier results by Nehring and Puppe (2002), Nehring (2006), Dietrich and List (2007a) and Dokow and Holzman (2010a).
It is argued in this paper that amalgamating confirmation from various sources is relevantly different from social-choice contexts, and that proving an impossibility theorem for aggregating confirmation measures directs attention to irrelevant issues.
Social choice theory is the study of collective decision processes and procedures. It is not a single theory, but a cluster of models and results concerning the aggregation of individual inputs (e.g., votes, preferences, judgments, welfare) into collective outputs (e.g., collective decisions, preferences, judgments, welfare). Central questions are: How can a group of individuals choose a winning outcome (e.g., policy, electoral candidate) from a given set of options? What are the properties of different voting systems? When is a voting system democratic? How can a collective (e.g., electorate, legislature, collegial court, expert panel, or committee) arrive at coherent collective preferences or judgments on some issues, on the basis of its members' individual preferences or judgments? How can we rank different social alternatives in an order of social welfare? Social choice theorists study these questions not just by looking at examples, but by developing general models and proving theorems.
Majority cycling and related social choice paradoxes are often thought to threaten the meaningfulness of democracy. But deliberation can prevent majority cycles – not by inducing unanimity, which is unrealistic, but by bringing preferences closer to single-peakedness. We present the first empirical test of this hypothesis, using data from Deliberative Polls. Comparing preferences before and after deliberation, we find increases in proximity to single-peakedness. The increases are greater for lower versus higher salience issues and for individuals who seem to have deliberated more versus less effectively. They are not merely a byproduct of increased substantive agreement. Our results both refine and support the idea that deliberation, by increasing proximity to single-peakedness, provides an escape from the problem of majority cycling.
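Single-peakedness, the structural condition at issue here, can be checked mechanically for a given ranking. The following sketch assumes a hypothetical left-right axis and made-up rankings; it is an illustration of the concept, not the paper's measurement procedure:

```python
# A ranking is single-peaked w.r.t. a left-right axis if, moving away from
# the voter's top choice in either direction, alternatives only get worse.

def single_peaked(ranking, axis):
    """ranking: alternatives listed best to worst; axis: left-to-right
    ordering of the same alternatives."""
    rank = {alt: i for i, alt in enumerate(ranking)}  # 0 = best
    peak = axis.index(ranking[0])
    left = [rank[a] for a in axis[:peak + 1]]   # approaching the peak
    right = [rank[a] for a in axis[peak:]]      # moving away from the peak
    return (all(a > b for a, b in zip(left, left[1:])) and
            all(a < b for a, b in zip(right, right[1:])))

axis = ["L", "C", "R"]
print(single_peaked(["C", "L", "R"], axis))  # True: worse away from C
print(single_peaked(["L", "R", "C"], axis))  # False: C is ranked worst
                                             # despite lying between L and R
```

A profile in which every voter's ranking passes this check is guaranteed to be free of majority cycles, which is why proximity to single-peakedness matters for the cycling problem.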
By introducing elements of phenomenological philosophy into the analysis of human needs in economics, starting from Sartrean postulates and from the nature and essence of individuals' needs, a theoretical framework is developed for examining human beings' existential behavior through their phenomenological social choices and welfare. Defining a planning agent under strong assumptions of rationality and projective efficacious capabilities, Arrow's theorem is proved for an economic agent aware of its finitude in this world.
Public deliberation has been defended as a rational and noncoercive way to overcome paradoxical results from democratic voting, by promoting consensus on the available alternatives on the political agenda. Some critics have argued that full consensus is too demanding and inimical to pluralism and have pointed out that single-peakedness, a much less stringent condition, is sufficient to overcome voting paradoxes. According to these accounts, deliberation can induce single-peakedness through the creation of a ‘meta-agreement’, that is, agreement on the dimension according to which the issues at stake are ‘conceptualized’. We argue here that once all the conditions needed for deliberation to bring about single-peakedness through meta-agreement are unpacked and made explicit, meta-agreement turns out to be a highly demanding condition, and one that is very inhospitable to pluralism.
The impossibility results in judgement aggregation show a clash between fair aggregation procedures and rational collective outcomes. In this paper, we are interested in analysing the notion of rational outcome by proposing a proof-theoretical understanding of collective rationality. In particular, we use the analysis of proofs and inferences provided by linear logic in order to define a fine-grained notion of group reasoning that allows for studying collective rationality with respect to a number of logics. We analyse the well-known paradoxes in judgement aggregation and we pinpoint the reasoning steps that trigger the inconsistencies. Moreover, we extend the map of possibility and impossibility results in judgement aggregation by discussing the case of substructural logics. In particular, we show that there exist fragments of linear logic for which general possibility results can be obtained.
It has been claimed that deliberation is capable of overcoming social choice theory impossibility results, by bringing about single-peakedness. Our aim is to better understand the relationship between single-peakedness and collective justifications of preferences.
The aim of this article is to introduce the theory of judgment aggregation, a growing interdisciplinary research area. The theory addresses the following question: How can a group of individuals make consistent collective judgments on a given set of propositions on the basis of the group members' individual judgments on them? I begin by explaining the observation that initially sparked the interest in judgment aggregation, the so-called "doctrinal" and "discursive paradoxes". I then introduce the basic formal model of judgment aggregation, which allows me to present some illustrative variants of a generic impossibility result. I subsequently turn to the question of how this impossibility result can be avoided, going through several possible escape routes. Finally, I relate the theory of judgment aggregation to other branches of aggregation theory. Rather than offering a comprehensive survey of the theory of judgment aggregation, I hope to introduce the theory in a succinct and pedagogical way, providing an illustrative rather than exhaustive coverage of some of its key ideas and results.
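The discursive paradox that sparked this field can be sketched concretely. The judges and propositions below are the standard textbook illustration, not drawn from the article itself: each individual judgment set is consistent, yet propositionwise majority voting yields an inconsistent collective one.

```python
# Standard illustrative example of the discursive paradox: three judges
# vote on propositions p, q, and their conjunction p∧q.

judges = [
    {"p": True,  "q": True,  "p_and_q": True},   # judge 1: consistent
    {"p": True,  "q": False, "p_and_q": False},  # judge 2: consistent
    {"p": False, "q": True,  "p_and_q": False},  # judge 3: consistent
]

def majority(judgments, prop):
    """True if a strict majority accepts the proposition."""
    return sum(1 for j in judgments if j[prop]) > len(judgments) / 2

collective = {prop: majority(judges, prop) for prop in ["p", "q", "p_and_q"]}
print(collective)  # {'p': True, 'q': True, 'p_and_q': False}

# The collective judgment set is inconsistent: it accepts p and q
# but rejects their conjunction.
consistent = collective["p_and_q"] == (collective["p"] and collective["q"])
print(consistent)  # False
```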
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The present illustrative rather than exhaustive review is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
Can we design a perfect democratic decision procedure? Condorcet famously observed that majority rule, our paradigmatic democratic procedure, has some desirable properties, but sometimes produces inconsistent outcomes. Revisiting Condorcet’s insights in light of recent work on the aggregation of judgments, I show that there is a conflict between three initially plausible requirements of democracy: “robustness to pluralism”, “basic majoritarianism”, and “collective rationality”. For all but the simplest collective decision problems, no decision procedure meets these three requirements at once; at most two can be met together. This “democratic trilemma” raises the question of which requirement to give up. Since different answers correspond to different views about what matters most in a democracy, the trilemma suggests a map of the “logical space” in which different conceptions of democracy are located. It also sharpens our thinking about other impossibility problems of social choice and how to avoid them, by capturing a core structure many of these problems have in common. More broadly, it raises the idea of “cartography of logical space” in relation to contested political concepts.
In judgment aggregation, unlike preference aggregation, not much is known about domain restrictions that guarantee consistent majority outcomes. We introduce several conditions on individual judgments sufficient for consistent majority judgments. Some are based on global orders of propositions or individuals, others on local orders, still others not on orders at all. Some generalize classic social-choice-theoretic domain conditions, others have no counterpart. Our most general condition generalizes Sen’s triplewise value-restriction, itself the most general classic condition. We also prove a new characterization theorem: for a large class of domains, if there exists any aggregation function satisfying some democratic conditions, then majority voting is the unique such function. Taken together, our results provide new support for the robustness of majority rule.
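Sen's triplewise value restriction, which the paper generalizes, can be sketched for preference profiles. The helper and profiles below are an illustrative reconstruction under standard definitions, not the paper's own formalism: a profile over a triple is value-restricted if some alternative is never ranked best, never ranked middle, or never ranked worst by any voter over that triple.

```python
# Illustrative check of Sen's value restriction over a triple of alternatives.

def value_restricted(profile, triple):
    """True if some alternative in the triple is never best, never middle,
    or never worst in any voter's ranking restricted to the triple."""
    for alt in triple:
        for pos in range(3):  # 0 = best, 1 = middle, 2 = worst
            # restrict each voter's ranking to the triple, then read off
            # the alternative at position pos
            if all(sorted(triple, key=r.index)[pos] != alt for r in profile):
                return True
    return False

# The Condorcet-cycle profile violates value restriction: every alternative
# occupies every position for some voter.
cycle_profile = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(value_restricted(cycle_profile, ("a", "b", "c")))  # False

# This single-peaked profile satisfies it: b is never ranked worst,
# so majority rule is guaranteed consistent by Sen's theorem.
peaked_profile = [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]]
print(value_restricted(peaked_profile, ("a", "b", "c")))  # True
```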
Standard impossibility theorems on judgment aggregation over logically connected propositions either use a controversial systematicity condition or apply only to agendas of propositions with rich logical connections. Are there any serious impossibilities without these restrictions? We prove an impossibility theorem without requiring systematicity that applies to most standard agendas: Every judgment aggregation function (with rational inputs and outputs) satisfying a condition called unbiasedness is dictatorial (or effectively dictatorial if we remove one of the agenda conditions). Our agenda conditions are tight. When applied illustratively to (strict) preference aggregation represented in our model, the result implies that every unbiased social welfare function with universal domain is effectively dictatorial.
Amartya Sen has recently urged that political philosophers pay attention to social choice theory in their deliberations about justice. However, despite its merits, social choice theory is not standardly part of undergraduate political philosophy. One difficulty is that it involves symbolic logic and difficult concepts. We can reduce this challenge by making the material no harder than it needs to be. I consider the standard proof of Arrow’s Theorem, a seminal result. Kenneth Arrow does not explicate the role of the independence of irrelevant alternatives. Sen and Wulf Gaertner have offered clarifications, but I shall elucidate the full role.
This article introduces the symposium on judgment aggregation. The theory of judgment aggregation asks how several individuals' judgments on some logically connected propositions can be aggregated into consistent collective judgments. The aim of this introduction is to show how ideas from the familiar theory of preference aggregation can be extended to this more general case. We first translate a proof of Arrow's impossibility theorem into the new setting, so as to motivate some of the central concepts and conditions leading to analogous impossibilities, as discussed in the symposium. We then consider each of four possible escape-routes explored in the symposium.
In this paper, I investigate the relationship between preference and judgment aggregation, using the notion of ranking judgment introduced in List and Pettit. Ranking judgments were introduced in order to state the logical connections between the impossibility theorem of aggregating sets of judgments and Arrow’s theorem. I present a proof of the theorem concerning ranking judgments as a corollary of Arrow’s theorem, extending the translation between preferences and judgments defined in List and Pettit to the conditions on the aggregation procedure.
Shows how, as a consequence of the Arrow Impossibility Theorem, objectivity in grading is chimerical, given a sufficiently knowledgeable teacher (of his students, not his subject) in a sufficiently small class. PDF available from JSTOR only; permission to post the full version, previously granted by the journal editors and publisher, has expired. Unpublished reply posted gratis.
Our aim in this survey article is to provide an accessible overview of some key results and questions in the theory of judgment aggregation. We omit proofs and technical details, focusing instead on concepts and underlying ideas.
In solving judgment aggregation problems, groups often face constraints. Many decision problems can be modelled in terms of the acceptance or rejection of certain propositions in a language, and constraints as propositions that the decisions should be consistent with. For example, court judgments in breach-of-contract cases should be consistent with the constraint that action and obligation are necessary and sufficient for liability; judgments on how to rank several options in an order of preference with the constraint of transitivity; and judgments on budget items with budgetary constraints. Often more or less demanding constraints on decisions are imaginable. For instance, in preference ranking problems, the transitivity constraint is often contrasted with the weaker acyclicity constraint. In this paper, we make constraints explicit in judgment aggregation by relativizing the rationality conditions of consistency and deductive closure to a constraint set, whose variation yields more or less strong notions of rationality. We review several general results on judgment aggregation in light of such constraints.
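The contrast between the transitivity and acyclicity constraints can be sketched in code. The relation and helper functions below are illustrative assumptions under the standard definitions, not the paper's formalism: a strict preference relation can be acyclic without being transitive, which is why acyclicity is the weaker constraint.

```python
# Illustrative check: a strict preference relation, given as a set of
# (better, worse) pairs, that is acyclic but not transitive.

def is_transitive(pairs):
    """x > y and y > z must imply x > z."""
    return all((x, z) in pairs
               for (x, y) in pairs
               for (w, z) in pairs if y == w)

def is_acyclic(pairs):
    """No chain x1 > x2 > ... > x1; checked by depth-first search."""
    graph = {}
    for x, y in pairs:
        graph.setdefault(x, []).append(y)

    def reachable(start, target, seen):
        for nxt in graph.get(start, []):
            if nxt == target or (nxt not in seen
                                 and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    return not any(reachable(x, x, {x}) for x in graph)

relation = {("a", "b"), ("b", "c")}   # a > b and b > c, but a vs c undecided
print(is_transitive(relation))  # False: (a, c) is missing
print(is_acyclic(relation))     # True: no chain leads back to its start
```

Every transitive strict relation is acyclic, but not conversely, so relativizing rationality to the acyclicity constraint admits strictly more collective outcomes.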
This book offers a systematic treatment of the requirements of democratic legitimacy. It argues that democratic procedures are essential for political legitimacy because of the need to respect value pluralism and because of the learning process that democratic decision-making enables. It proposes a framework for distinguishing among the different ways in which the requirements of democratic legitimacy have been interpreted. Peter then uses this framework to identify and defend what appears as the most plausible conception of democratic legitimacy. According to this conception, democratic legitimacy requires that the decision-making process satisfies certain conditions of political and epistemic fairness.