This book examines the Condorcet Jury Theorem and asks how far its assumptions apply to the real world. It uses the theorem to assess various familiar political practices and alternative institutional arrangements, revealing how best to take advantage of the truth-tracking potential of majoritarian democracy.
Political science is divided between methodological individualists, who seek to explain political phenomena by reference to individuals and their interactions, and holists (or nonreductionists), who consider some higher-level social entities or properties such as states, institutions, or cultures ontologically or causally significant. We propose a reconciliation between these two perspectives, building on related work in philosophy. After laying out a taxonomy of different variants of each view, we observe that (i) although political phenomena result from underlying individual attitudes and behavior, individual-level descriptions do not always capture all explanatorily salient properties, and (ii) nonreductionistic explanations are mandated when social regularities are robust to changes in their individual-level realization. We characterize the dividing line between phenomena requiring nonreductionistic explanation and phenomena permitting individualistic explanation and give examples from the study of ethnic conflicts, social-network theory, and international-relations theory.
The contemporary theory of epistemic democracy often draws on the Condorcet Jury Theorem to formally justify the ‘wisdom of crowds’. But this theorem is inapplicable in its current form, since one of its premises – voter independence – is notoriously violated. This premise carries responsibility for the theorem's misleading conclusion that ‘large crowds are infallible’. We prove a more useful jury theorem: under defensible premises, ‘large crowds are fallible but better than small groups’. This theorem rehabilitates the importance of deliberation and education, which appear inessential in the classical jury framework. Our theorem is related to Ladha's (1993) seminal jury theorem for interchangeable (‘indistinguishable’) voters based on de Finetti's Theorem. We also prove a more general and simpler such jury theorem.
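To make the contrast concrete, write $P_n$ for the probability that a majority of $n$ voters selects the correct alternative. The two conclusions can then be rendered schematically as follows (the notation is an illustration, not the paper's own):

$$\text{Classical conclusion:}\quad P_n \text{ increases in } n \text{ and } \lim_{n\to\infty} P_n = 1 \quad (\text{‘large crowds are infallible’}),$$
$$\text{Revised conclusion:}\quad P_n \text{ increases in } n \text{ but } \lim_{n\to\infty} P_n < 1 \quad (\text{‘large crowds are fallible but better than small groups’}).$$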
Democratic decision-making is often defended on grounds of the ‘wisdom of crowds’: decisions are more likely to be correct if they are based on many independent opinions, or so goes a typical argument in social epistemology. But what does it mean to have independent opinions? Opinions can be probabilistically dependent even if individuals form their opinion in causal isolation from each other. We distinguish four probabilistic notions of opinion independence. Which of them holds depends on how individuals are causally affected by environmental factors such as commonly perceived evidence. In a general theorem, we identify causal conditions guaranteeing each kind of opinion independence. These results have implications for whether and how ‘wisdom of crowds’ arguments are possible, and how truth-conducive institutions can be designed.
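To illustrate how such notions can come apart (a schematic formalization, not necessarily the paper's own four definitions), let $V_1, V_2$ be the events that voters 1 and 2 hold a correct opinion, $x$ the unknown state of the world, and $e$ some commonly perceived evidence. Unconditional independence,
$$P(V_1 \cap V_2) = P(V_1)\,P(V_2),$$
can fail even without any communication, because shared evidence acts as a common cause of both opinions; yet independence conditional on that evidence,
$$P(V_1 \cap V_2 \mid e) = P(V_1 \mid e)\,P(V_2 \mid e),$$
or conditional on the state of the world,
$$P(V_1 \cap V_2 \mid x) = P(V_1 \mid x)\,P(V_2 \mid x),$$
may still hold.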
Jury theorems are mathematical theorems about the ability of collectives to make correct decisions. Several jury theorems carry the optimistic message that, in suitable circumstances, ‘crowds are wise’: many individuals together (using, for instance, majority voting) tend to make good decisions, outperforming fewer or just one individual. Jury theorems form the technical core of epistemic arguments for democracy, and provide probabilistic tools for reasoning about the epistemic quality of collective decisions. The popularity of jury theorems spans various disciplines such as economics, political science, philosophy, and computer science. This entry reviews and critically assesses a variety of jury theorems. It first discusses Condorcet's initial jury theorem, and then progressively introduces jury theorems with more appropriate premises and conclusions. It explains the philosophical foundations, and relates jury theorems to diversity, deliberation, shared evidence, shared perspectives, and other phenomena. It finally connects jury theorems to their historical background and to democratic theory, social epistemology, and social choice theory.
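For reference, the textbook form of Condorcet's initial theorem assumes an odd number $n$ of voters who vote independently and are each correct with the same probability $p > 1/2$. The probability that the majority is correct is then
$$P_n \;=\; \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k}\,(1-p)^{n-k},$$
which increases in $n$ and tends to $1$ as $n \to \infty$ – the asymptotic ‘infallibility’ conclusion that the refined jury theorems reviewed in the entry qualify.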
In this paper we develop a new methodology for normative theorising, which we call Directed Reflective Equilibrium. Directed Reflective Equilibrium is based on a taxonomy that distinguishes between a number of different functions of hypothetical cases, including two dimensions that we call representation and elicitation. Like its predecessor, Directed Reflective Equilibrium accepts that neither intuitions nor basic principles are immune to revision and that our commitments on various levels of philosophical enquiry should be brought into equilibrium. However, it also offers guidance about how different types of cases ought to be sequenced to achieve this result. We argue that this ‘directional’ approach improves, in various ways, upon the non-directional approach of traditional Reflective Equilibrium.
Does pre-voting group deliberation increase majority competence? To address this question, we develop a probabilistic model of opinion formation and deliberation. Two new jury theorems, one pre-deliberation and one post-deliberation, suggest that deliberation is beneficial. Successful deliberation mitigates three voting failures: (1) overcounting widespread evidence, (2) neglecting evidential inequality, and (3) neglecting evidential complementarity. Simulations and theoretical arguments confirm this. But there are five systematic exceptions where deliberation reduces majority competence, always by increasing failure (1). Our analysis recommends deliberation that is 'participatory', 'even', but possibly 'unequal', i.e., deliberation that involves substantive sharing, privileges no piece of evidence, but possibly privileges some persons.
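The mechanism behind failure (1) can be illustrated with a toy Monte Carlo simulation (a sketch under stylized assumptions of my own, not the authors' model): when every voter defers to a single widely shared signal, the majority is only as reliable as that one signal, whereas aggregating independent private signals does considerably better.

```python
import random

def trial(n_voters, p_private, q_public, overcount):
    """One election in which the correct alternative is '1'. Each voter has a
    private signal (correct with probability p_private); one public signal
    (correct with probability q_public) is shared by everyone."""
    public_signal = 1 if random.random() < q_public else 0
    votes = []
    for _ in range(n_voters):
        private_signal = 1 if random.random() < p_private else 0
        # Overcounting widespread evidence: every voter simply follows the
        # shared public signal, so one piece of evidence is counted n times.
        votes.append(public_signal if overcount else private_signal)
    return sum(votes) > n_voters / 2  # is the majority correct?

def majority_accuracy(overcount, runs=20_000):
    return sum(trial(101, 0.6, 0.7, overcount) for _ in range(runs)) / runs

if __name__ == "__main__":
    # Parameters are hypothetical illustrations, not taken from the paper.
    print("voters follow private signals:", majority_accuracy(overcount=False))
    print("voters follow shared signal:  ", majority_accuracy(overcount=True))
```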
We give a review and critique of jury theorems from a social-epistemology perspective, covering Condorcet’s (1785) classic theorem and several later refinements and departures. We assess the plausibility of the conclusions and premises featuring in jury theorems and evaluate the potential of such theorems to serve as formal arguments for the ‘wisdom of crowds’. In particular, we argue (i) that there is a fundamental tension between voters’ independence and voters’ competence, hence between the two premises of most jury theorems; (ii) that the (asymptotic) conclusion that ‘huge groups are infallible’, reached by many jury theorems, is an artifact of unjustified premises; and (iii) that the (nonasymptotic) conclusion that ‘larger groups are more reliable’, also reached by many jury theorems, is not an artifact and should be regarded as the more adequate formal rendition of the ‘wisdom of crowds’.
The Federalist, justifying the Electoral College to elect the president, claimed that a small group of more informed individuals would make a better decision than the general mass. But the Condorcet Jury Theorem tells us that the more independent, better-than-random voters there are, the more likely it will be that the majority among them will be correct. The question thus arises as to how much better, on average, members of the smaller group would have to be to compensate for the epistemic costs of making decisions on the basis of that many fewer votes. This question is explored in the contexts of referendum democracy, delegate-style representative democracy, and trustee-style representative democracy.
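The trade-off can be made concrete with the standard Condorcet calculation (a sketch with hypothetical sizes and competence levels, not figures from the text), comparing the majority competence of a large, modestly competent electorate with that of a much smaller but better-informed body:

```python
from math import comb

def majority_competence(n, p):
    """Probability that a simple majority of n independent voters, each
    correct with probability p, picks the correct option (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Hypothetical sizes and competence levels, chosen purely for illustration.
print("large electorate, n=1001, p=0.51:", round(majority_competence(1001, 0.51), 3))
print("small body,       n=101,  p=0.55:", round(majority_competence(101, 0.55), 3))
print("small body,       n=101,  p=0.60:", round(majority_competence(101, 0.60), 3))
```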
Solidarity is supposed to facilitate collective action. We argue that it can also help overcome false consciousness. Groups practice epistemic solidarity if they pool information about what is in their true interest and how to vote accordingly. The more numerous Masses can in this way overcome the Elites, but only if they are minimally confident in identifying those with whom they share the same interests and only if they are better-than-random in voting for the alternative that promotes their interests. Being more cohesive and more competent than the Masses, the Elites can employ the same strategy perhaps all the more effectively. But so long as the Masses practice epistemic solidarity they will almost always win, whether or not the Elites do. By enriching the traditional framework of the Condorcet Jury Theorem with group-specific standards of correctness, we investigate how groups can organize to support the alternatives truly in their interests.
Many companies offer their customers voluntary carbon ‘offset’ certificates to compensate for greenhouse gas emissions. Voluntary offset certificates are cheap because the demand for them is low, allowing consumers to compensate for their emissions without significant sacrifices. Regarding the distribution of emission reduction responsibilities, I argue that excess emissions are permissible if they are offset properly. However, if individuals buy offsets only because they are cheap, they fail to be robustly motivated to choose a permissible course of action. This suspected lack of robust motivation raises both pragmatic questions about the functioning of offsetting schemes and moral questions about the worth of such unstable motives. The analysis provided here also has wider implications for the normative analysis of partial compliance and ‘many hands’ problems, especially for those cases where compliance levels and costs interact.
To find out what is in one’s own best interest, it is helpful to ask one’s epistemic peers. However, identifying one’s epistemic peers is not a trivial task. I consider a stylized political setting, an electoral competition of ‘Masses’ and ‘Elites’. To succeed, the Masses need to know which alternative on offer is truly in their interest. To find out, the Masses can pool their privately held information in a pre-election ballot, provided that they can reliably find out with whom they should pool information. I investigate the process of finding the relevant peer group for information pooling by modelling this group formation process as dynamic network change. The simulations show that the Masses can succeed in finding the right peers, but they also suggest reasons why the Elites may often be more successful. This phenomenon generalizes to the notion of Epistemic Network Injustice. Such injustice arises when a subset of citizens is systematically deprived of connections to helpful epistemic peers, leading to their reduced political influence. Epistemic Network Injustice is a new form of epistemic injustice, related to but distinct from the notion introduced by Miranda Fricker.
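A minimal sketch of the general modelling idea (agents rewiring their network links on the basis of noisy judgements about who shares their interests) appears below; the group sizes, link counts, and reliability values are hypothetical, not the parameters of the paper's model.

```python
import random

def run(n_mass=80, n_elite=20, k=5, rounds=200, seed=1):
    """Toy dynamic-network model: each agent keeps k outgoing links, probes a
    random current peer each round, and rewires away from peers it judges
    (fallibly) not to share its interests. Returns the final share of
    same-group links ('homophily') for each group."""
    rng = random.Random(seed)
    reliability = {"mass": 0.6, "elite": 0.8}   # hypothetical detection rates
    groups = ["mass"] * n_mass + ["elite"] * n_elite
    n = len(groups)
    links = {i: rng.sample([j for j in range(n) if j != i], k) for i in range(n)}

    for _ in range(rounds):
        for i in range(n):
            j = rng.choice(links[i])
            observed_same = (groups[i] == groups[j])        # true answer ...
            if rng.random() > reliability[groups[i]]:       # ... observed noisily
                observed_same = not observed_same
            if not observed_same:
                candidates = [c for c in range(n) if c != i and c not in links[i]]
                links[i][links[i].index(j)] = rng.choice(candidates)

    def homophily(group):
        members = [i for i in range(n) if groups[i] == group]
        same = sum(groups[j] == group for i in members for j in links[i])
        return same / (len(members) * k)

    return {g: round(homophily(g), 2) for g in ("mass", "elite")}

if __name__ == "__main__":
    print(run())
```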
In ‘A Constitution of Many Minds’, Cass Sunstein argues that the three major approaches to constitutional interpretation – Traditionalism, Populism and Cosmopolitanism – all rely on some variation of a ‘many-minds’ argument. Here we assess each of these claims through the lens of the Condorcet Jury Theorem. In regard to the first two approaches, we explore the implications of sequential influence among courts (past and foreign, respectively). In regard to the Populist approach, we consider the influence of opinion leaders.
In recent years, judgement aggregation has emerged as an important area of social choice theory. Judgement aggregation is concerned with aggregating sets of individual judgements over logically connected propositions into a set of collective judgements. It has been shown that even seemingly weak conditions on the aggregation function make it impossible to find functions that produce rational collective judgements from all possible rational individual judgements. This implies that the step from individual judgements to collective judgements requires trade-offs between different desiderata, such as universal domain, rationality, epistemological quality, and unbiasedness. These dilemmas challenge us to decide which conditions we should relax. The typical application for judgement aggregation is the problem of group decision making. Juries and expert committees are the stock examples. However, the relevance of judgement aggregation goes beyond these cases. In this survey I review some core results in the field of judgement aggregation and social epistemology and discuss their implications for the analysis of distributed thinking.
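The best-known illustration of such an impossibility is the discursive dilemma (a standard textbook example, not taken from the article itself): proposition-wise majority voting over logically connected propositions can produce an inconsistent collective judgement set even though every individual judges consistently.

$$
\begin{array}{l|ccc}
 & p & q & p \wedge q \\ \hline
\text{Judge 1} & \text{Yes} & \text{Yes} & \text{Yes} \\
\text{Judge 2} & \text{Yes} & \text{No} & \text{No} \\
\text{Judge 3} & \text{No} & \text{Yes} & \text{No} \\ \hline
\text{Majority} & \text{Yes} & \text{Yes} & \text{No}
\end{array}
$$

The majority accepts $p$ and accepts $q$ yet rejects $p \wedge q$, so the collective judgement set is logically inconsistent.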
The advent of artificial intelligence (AI) challenges political theorists to think about data ownership and policymakers to regulate the collection and use of public data. AI producers benefit from free public data for training their systems while retaining the profits. We argue against the view that the use of public data must be free. The proponents of unconstrained use point out that consuming data does not diminish its quality and that information is in ample supply. Therefore, they suggest, publicly available data should be free. We present two objections. First, allowing free data use promotes unwanted inequality. Second, contributors of information did not and could not anticipate that their contribution would be used to train AI systems. Therefore, charging for extensive data use is pro tanto permissible and desirable. We discuss policy implications and propose a progressive data use tax to counter the resulting inequality.
Many goods are distributed by processes that involve randomness. In lotteries, randomness is used to promote fairness. When taking social risks, randomness is a feature of the process. The losers of such decisions ought to be given a reason why they should accept the outcome. Surprisingly, good reasons demand more than merely equal ex ante chances. What is also required is a true statement of the form: ‘the result could easily have gone the other way and you could have been the winner’. This rules in standard lotteries but rules out many lotteries based on merely epistemic probability.
A link between populism and social media is often suspected. This paper spells out a set of possible mechanisms underpinning this link: that social media changes the communication structure of the public sphere, making it harder for citizens to obtain evidence that refutes populist assumptions. By developing a model of the public sphere, four core functions of the public sphere are identified: exposing citizens to diverse information, promoting equality of deliberative opportunity, creating deliberative transparency, and producing common knowledge. A well-working public sphere allows citizens to learn that there are genuine disagreements among citizens that are held in good faith. Social media makes it harder to gain this insight, opening the door for populist ideology.
Axelrod (The evolution of cooperation, 1984) and others explain how cooperation can emerge in repeated 2-person prisoner’s dilemmas. But in public good games with anonymous contributions, we expect a breakdown of cooperation because direct reciprocity fails. However, if agents are situated in a social network determining which agents interact, and if they can influence the network, then cooperation can be a viable strategy. Social networks are modelled as graphs. Agents play public good games with their neighbours. After each game, they can terminate connections to others, and new connections are created. Cooperative agents do well because they manage to cluster with cooperators and avoid defectors. Computer simulations demonstrate that group formation and exclusion are powerful mechanisms to promote cooperation in dilemma situations. This explains why social dilemmas can often be solved if agents can choose with whom they interact.
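The kind of model described here can be sketched in a few dozen lines (the payoff multiplier, group sizes, and rewiring rule below are simplified assumptions of mine, not the specification used in the article): agents play local public-good games with their neighbours, cut links to defectors, and relink elsewhere, and the script reports average payoffs by strategy so one can check whether exclusion lets cooperators cluster.

```python
import random

COOP, DEFECT = "C", "D"

def simulate(n=60, coop_share=0.5, k=4, r=3.0, rounds=100, seed=7):
    """Toy networked public-good game: cooperators contribute 1 unit to each
    local pool they are part of, every pool is multiplied by r and split
    equally among its members; after each round agents may cut one link to a
    defecting neighbour and rewire it to a randomly chosen agent."""
    rng = random.Random(seed)
    strategy = [COOP] * int(n * coop_share) + [DEFECT] * (n - int(n * coop_share))
    rng.shuffle(strategy)
    neighbours = {i: set() for i in range(n)}
    while sum(len(s) for s in neighbours.values()) < n * k:   # random start graph
        a, b = rng.sample(range(n), 2)
        neighbours[a].add(b); neighbours[b].add(a)

    payoff = [0.0] * n
    for _ in range(rounds):
        for i in range(n):                                    # local public-good games
            group = [i] + list(neighbours[i])
            contributions = sum(1 for j in group if strategy[j] == COOP)
            share = contributions * r / len(group)
            for j in group:
                payoff[j] += share - (1 if strategy[j] == COOP else 0)
        for i in range(n):                                    # exclusion and rewiring
            defectors = [j for j in neighbours[i] if strategy[j] == DEFECT]
            candidates = [c for c in range(n) if c != i and c not in neighbours[i]]
            if defectors and candidates:
                j = rng.choice(defectors)
                neighbours[i].discard(j); neighbours[j].discard(i)
                new = rng.choice(candidates)
                neighbours[i].add(new); neighbours[new].add(i)

    def avg(s):
        return sum(p for p, st in zip(payoff, strategy) if st == s) / strategy.count(s)

    return {"cooperators": round(avg(COOP), 1), "defectors": round(avg(DEFECT), 1)}

if __name__ == "__main__":
    print(simulate())
```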
We propose a cognitive-dissonance model of norm compliance to identify conditions for selfishly biased information acquisition. The model distinguishes between: (i) objective norm compliers, for whom the right action is a function of the state of the world; (ii) subjective norm compliers, for whom it is a function of their belief. The former seek as much information as possible; the latter acquire only information that lowers, in expected terms, normative demands. The source of ‘moral wiggle room’ is not belief manipulation, but the coarseness of normative prescriptions under conditions of uncertainty. In a novel experimental setup, we find evidence for such strategic information uptake. Our results suggest that attempts to change behavior by subjecting individuals to norms can lead to biased information acquisition instead of compliance.
Replying to my earlier article ‘Translucency, Assortation, and Information Pooling: How Groups Solve Social Dilemmas’, Robert Goodin examines the normative implications of the rule ‘cooperate with those whose inclusion benefits the larger scheme of cooperation’, and gives several reasons why the conversion of justice into a club good is normatively unappealing. This reply to Goodin discusses whether the rule leads to an exclusion of poor agents, whether a group should hire agents to detect free-riders, and how a group should deal with naive cooperators. The rule can be defended as an enforcement mechanism in some cases, but it is normatively unappealing as a theory of justice. Key Words: club • public good • cooperation • justice • exclusion • Donald Regan.
In one-shot public goods dilemmas, defection is the strictly dominant strategy. However, agents with cooperative strategies can do well if (1) agents are ‘translucent’ (that is, if agents can fallibly recognize the strategy other agents play ex ante) and (2) an institutional structure allows ‘assortation’ such that cooperative agents can increase the likelihood of playing with their own kind. The model developed in this article shows that even weak levels of translucency suffice if cooperators are able to pool their information to exclude defectors. Computer simulations confirm this claim. The results imply that conditional cooperation can be a successful strategy given translucency and ‘assortation’, even if the game has a one-shot character. The article discusses implications for moral theory against the backdrop of ‘virtual self-regard’ and the concept of moral integrity. Key Words: public good • game theory • prisoner's dilemma • translucency • assortation • group formation • group identity • integrity • altruism.
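The core mechanism, namely that many weak and fallible reads of a player's type become reliable once cooperators pool them, can be sketched as follows (a toy one-shot setup with parameters chosen for illustration, not the article's simulation):

```python
import random

def simulate(n_coop=30, n_defect=30, translucency=0.6, r=2.0, seed=3):
    """One-shot public-good game with 'translucent' agents: every cooperator
    gets a noisy read of each other agent's type (correct with probability
    `translucency`); the cooperators pool these reads by majority vote and
    admit only apparent cooperators to their group ('assortation'). The club
    and the excluded rest then each play a one-shot public-good game."""
    rng = random.Random(seed)
    types = ["C"] * n_coop + ["D"] * n_defect
    n = len(types)

    def classified_as_cooperator(j):
        readers = [i for i in range(n) if types[i] == "C" and i != j]
        # A read is correct with probability `translucency`; it 'says C' iff
        # it is correct about a cooperator or wrong about a defector.
        says_c = sum((rng.random() < translucency) == (types[j] == "C")
                     for _ in readers)
        return says_c > len(readers) / 2

    club = {j for j in range(n) if classified_as_cooperator(j)}
    outside = set(range(n)) - club

    def group_payoffs(group):
        if not group:
            return {}
        contributions = sum(1 for j in group if types[j] == "C")
        share = contributions * r / len(group)
        return {j: share - (1 if types[j] == "C" else 0) for j in group}

    payoff = {**group_payoffs(club), **group_payoffs(outside)}
    avg = lambda t: sum(payoff[j] for j in range(n) if types[j] == t) / types.count(t)
    return {"avg cooperator payoff": round(avg("C"), 2),
            "avg defector payoff": round(avg("D"), 2),
            "defectors admitted to club": sum(types[j] == "D" for j in club)}

if __name__ == "__main__":
    # Weak individual reads (0.6) pooled across 30 cooperators typically keep
    # most defectors out; all parameters here are illustrative only.
    print(simulate())
```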