Are companies, churches, and states genuine agents? Or are they just collections of individuals that give a misleading impression of unity? This question is important, since the answer dictates how we should explain the behaviour of these entities and whether we should treat them as responsible and accountable on the model of individual agents. Group Agency offers a new approach to that question and is relevant, therefore, to a range of fields from philosophy to law, politics, and the social sciences. Christian List and Philip Pettit argue that there really are group or corporate agents, over and above the individual agents who compose them, and that a proper approach to the social sciences, law, morality, and politics must take account of this fact. Unlike some earlier defences of group agency, their account is entirely unmysterious in character and, while not technically difficult, is grounded in cutting-edge work in social choice theory, economics, and philosophy.
Philosophers have argued about the nature and the very existence of free will for centuries. Today, many scientists and scientifically minded commentators are skeptical that it exists, especially when it is understood to require the ability to choose between alternative possibilities. If the laws of physics govern everything that happens, they argue, then how can our choices be free? Believers in free will must be misled by habit, sentiment, or religious doctrine. Why Free Will Is Real defies scientific orthodoxy and presents a bold new defense of free will in the same naturalistic terms that are usually deployed against it. Unlike those who defend free will by giving up the idea that it requires alternative possibilities to choose from, Christian List retains this idea as central, resisting the tendency to defend free will by watering it down. He concedes that free will and its prerequisites—intentional agency, alternative possibilities, and causal control over our actions—cannot be found among the fundamental physical features of the natural world. But, he argues, that’s not where we should be looking. Free will is a “higher-level” phenomenon found at the level of psychology. It is like other phenomena that emerge from physical processes but are autonomous from them and not best understood in fundamental physical terms—like an ecosystem or the economy. When we discover it in its proper context, acknowledging that free will is real is not just scientifically respectable; it is indispensable for explaining our world.
It is often argued that higher-level special-science properties cannot be causally efficacious since the lower-level physical properties on which they supervene are doing all the causal work. This claim is usually derived from an exclusion principle stating that if a higher-level property F supervenes on a physical property F* that is causally sufficient for a property G, then F cannot cause G. We employ an account of causation as difference-making to show that the truth or falsity of this principle is a contingent matter and derive necessary and sufficient conditions under which a version of it holds. We argue that one important instance of the principle, far from undermining non-reductive physicalism, actually supports the causal autonomy of certain higher-level properties.
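For concreteness, here is one hedged way to state the two key ingredients in symbols, assuming the standard Lewis-style counterfactual reading of difference-making (the paper's own formulation may differ in detail):

```latex
% Difference-making account of causation (counterfactual reading):
% F causes G just in case F makes a difference to G.
\[
  F \text{ causes } G
  \quad\text{iff}\quad
  (F \mathrel{\Box\!\!\to} G) \;\wedge\; (\neg F \mathrel{\Box\!\!\to} \neg G),
\]
% where $\Box\!\!\to$ is the counterfactual conditional
% ("if it were the case that ..., it would be the case that ...").
%
% The exclusion principle at issue:
\[
  \bigl(F \text{ supervenes on } F^{*}\bigr)
  \;\wedge\;
  \bigl(F^{*} \text{ is causally sufficient for } G\bigr)
  \;\Rightarrow\;
  F \text{ does not cause } G.
\]
```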
I argue that free will and determinism are compatible, even when we take free will to require the ability to do otherwise and even when we interpret that ability modally, as the possibility of doing otherwise, and not just conditionally or dispositionally. My argument draws on a distinction between physical and agential possibility. Although in a deterministic world only one future sequence of events is physically possible for each state of the world, the more coarsely defined state of an agent and his or her environment can be consistent with more than one such sequence, and thus different actions can be “agentially possible”. The agential perspective is supported by our best theories of human behaviour, and so we should take it at face value when we refer to what an agent can and cannot do. On the picture I defend, free will is not a physical phenomenon, but a higher-level one on a par with other higher-level phenomena such as agency and intentionality.
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility, and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation, and probability aggregation. The review is illustrative rather than exhaustive and is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
Suppose that the members of a group each hold a rational set of judgments on some interconnected questions, and imagine that the group itself has to form a collective, rational set of judgments on those questions. How should it go about dealing with this task? We argue that the question raised is subject to a difficulty that has recently been noticed in discussion of the doctrinal paradox in jurisprudence. And we show that there is a general impossibility theorem that this difficulty illustrates. Our paper describes this impossibility result and provides an exploration of its significance. The result naturally invites comparison with Kenneth Arrow's famous theorem (Arrow, 1963 and 1984; Sen, 1970) and we elaborate that comparison in a companion paper (List and Pettit, 2002). The paper is in four sections. The first section documents the need for various groups to aggregate their members' judgments; the second presents the discursive paradox; the third gives an informal statement of the more general impossibility result, with the formal proof presented in an appendix; the fourth section, finally, discusses some escape routes from that impossibility.
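As a concrete illustration of the discursive paradox just described, here is a minimal sketch in Python (the proposition labels and the three-member profile are invented for illustration): each individual holds a consistent judgment set, yet propositionwise majority voting yields an inconsistent collective set.

```python
from itertools import product

# Agenda: p, q, and the conjunction (p and q).
# A classic discursive-dilemma profile (hypothetical, for illustration):
#   judge 1 accepts p, q, and (p and q)
#   judge 2 accepts p, rejects q, hence rejects (p and q)
#   judge 3 rejects p, accepts q, hence rejects (p and q)
profile = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

def majority(profile, prop):
    """Propositionwise majority: accept prop iff more than half of the judges accept it."""
    return sum(judge[prop] for judge in profile) > len(profile) / 2

collective = {prop: majority(profile, prop) for prop in ["p", "q", "p_and_q"]}
print("Collective judgments:", collective)  # p accepted, q accepted, (p and q) rejected

# Consistency check: is there any truth-value assignment to p and q under which
# the collective judgments all come out true (reading p_and_q as p and q)?
consistent = any(
    collective["p"] == p and collective["q"] == q and collective["p_and_q"] == (p and q)
    for p, q in product([True, False], repeat=2)
)
print("Consistent?", consistent)  # False: {p, q, not (p and q)} is inconsistent
```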
The existence of group agents is relatively widely accepted. Examples are corporations, courts, NGOs, and even entire states. But should we also accept that there is such a thing as group consciousness? I give an overview of some of the key issues in this debate and sketch a tentative argument for the view that group agents lack phenomenal consciousness. In developing my argument, I draw on integrated information theory, a much-discussed theory of consciousness. I conclude by pointing out an implication of my argument for the normative status of group agents.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
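A minimal numerical sketch of the three pooling methods, for a finite set of mutually exclusive outcomes (the expert numbers and equal weights are illustrative, and the renormalised forms of geometric and multiplicative pooling below follow the standard textbook definitions rather than the paper's exact axiomatic variants):

```python
import math

def linear_pool(opinions, weights):
    """Weighted arithmetic average of the individuals' probabilities, outcome by outcome."""
    outcomes = opinions[0].keys()
    return {o: sum(w * p[o] for w, p in zip(weights, opinions)) for o in outcomes}

def geometric_pool(opinions, weights):
    """Weighted geometric average, renormalised so the result sums to 1."""
    outcomes = opinions[0].keys()
    raw = {o: math.prod(p[o] ** w for w, p in zip(weights, opinions)) for o in outcomes}
    total = sum(raw.values())
    return {o: v / total for o, v in raw.items()}

def multiplicative_pool(opinions):
    """Straight multiplication of the individuals' probabilities, renormalised."""
    outcomes = opinions[0].keys()
    raw = {o: math.prod(p[o] for p in opinions) for o in outcomes}
    total = sum(raw.values())
    return {o: v / total for o, v in raw.items()}

# Two hypothetical experts assigning probabilities to three exclusive outcomes:
expert_1 = {"low": 0.6, "medium": 0.3, "high": 0.1}
expert_2 = {"low": 0.2, "medium": 0.5, "high": 0.3}
weights = [0.5, 0.5]

print("Linear:        ", linear_pool([expert_1, expert_2], weights))
print("Geometric:     ", geometric_pool([expert_1, expert_2], weights))
print("Multiplicative:", multiplicative_pool([expert_1, expert_2]))
```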
Scientists and philosophers frequently speak about levels of description, levels of explanation, and ontological levels. In this paper, I propose a unified framework for modelling levels. I give a general definition of a system of levels and show that it can accommodate descriptive, explanatory, and ontological notions of levels. I further illustrate the usefulness of this framework by applying it to some salient philosophical questions: (1) Is there a linear hierarchy of levels, with a fundamental level at the bottom? And what does the answer to this question imply for physicalism, the thesis that everything supervenes on the physical? (2) Are there emergent properties? (3) Are higher-level descriptions reducible to lower-level ones? (4) Can the relationship between normative and non-normative domains be viewed as one involving levels? Although I use the terminology of “levels”, the proposed framework can also represent “scales”, “domains”, or “subject matters”, where these are not linearly but only partially ordered by relations of supervenience or inclusion.
The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificially intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.
We offer a new argument for the claim that there can be non-degenerate objective chance (“true randomness”) in a deterministic world. Using a formal model of the relationship between different levels of description of a system, we show how objective chance at a higher level can coexist with its absence at a lower level. Unlike previous arguments for the level-specificity of chance, our argument shows, in a precise sense, that higher-level chance does not collapse into epistemic probability, despite higher-level properties supervening on lower-level ones. We show that the distinction between objective chance and epistemic probability can be drawn, and operationalized, at every level of description. There is, therefore, not a single distinction between objective and epistemic probability, but a family of such distinctions.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
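A toy Monte Carlo sketch of the classical two-option case under the independence and competence assumptions (the jury sizes and the competence level of 0.6 are illustrative):

```python
import random

def prob_correct_majority(n_jurors, competence, n_trials=20_000):
    """Estimate the probability that a simple majority of independent jurors,
    each correct with probability `competence`, identifies the correct option."""
    correct_majorities = 0
    for _ in range(n_trials):
        correct_votes = sum(random.random() < competence for _ in range(n_jurors))
        if correct_votes > n_jurors / 2:
            correct_majorities += 1
    return correct_majorities / n_trials

for n in (1, 11, 51, 201):
    print(n, round(prob_correct_majority(n, competence=0.6), 3))
# With competence 0.6, the estimate climbs towards 1 as the jury grows,
# as the classical theorem predicts.
```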
Behaviourism is the view that preferences, beliefs, and other mental states in social-scientific theories are nothing but constructs re-describing people's behaviour. Mentalism is the view that they capture real phenomena, on a par with the unobservables in science, such as electrons and electromagnetic fields. While behaviourism has gone out of fashion in psychology, it remains influential in economics, especially in ‘revealed preference’ theory. We defend mentalism in economics, construed as a positive science, and show that it best fits scientific practice. We distinguish mentalism from, and reject, the radical neuroeconomic view that behaviour should be explained in terms of brain processes, as distinct from mental states.
Political science is divided between methodological individualists, who seek to explain political phenomena by reference to individuals and their interactions, and holists (or nonreductionists), who consider some higher-level social entities or properties such as states, institutions, or cultures ontologically or causally significant. We propose a reconciliation between these two perspectives, building on related work in philosophy. After laying out a taxonomy of different variants of each view, we observe that (i) although political phenomena result from underlying individual attitudes and behavior, individual-level descriptions do not always capture all explanatorily salient properties, and (ii) nonreductionistic explanations are mandated when social regularities are robust to changes in their individual-level realization. We characterize the dividing line between phenomena requiring nonreductionistic explanation and phenomena permitting individualistic explanation and give examples from the study of ethnic conflicts, social-network theory, and international-relations theory.
Much recent philosophical work on social freedom focuses on whether freedom should be understood as non-interference, in the liberal tradition associated with Isaiah Berlin, or as non-domination, in the republican tradition revived by Philip Pettit and Quentin Skinner. We defend a conception of freedom that lies between these two alternatives: freedom as independence. Like republican freedom, it demands the robust absence of relevant constraints on action. Unlike republican freedom, and like liberal freedom, it is not moralized. We show that freedom as independence retains the virtues of its liberal and republican counterparts while shedding their vices. Our aim is to put this conception of freedom more firmly on the map and to offer a novel perspective on the logical space in which different conceptions of freedom are located.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
The “doctrinal paradox” or “discursive dilemma” shows that propositionwise majority voting over the judgments held by multiple individuals on some interconnected propositions can lead to inconsistent collective judgments on these propositions. List and Pettit (2002) have proved that this paradox illustrates a more general impossibility theorem showing that there exists no aggregation procedure that generally produces consistent collective judgments and satisfies certain minimal conditions. Although the paradox and the theorem concern the aggregation of judgments rather than preferences, they invite comparison with two established results on the aggregation of preferences: the Condorcet paradox and Arrow's impossibility theorem. We may ask whether the new impossibility theorem is a special case of Arrow's theorem, or whether there are interesting disanalogies between the two results. In this paper, we compare the two theorems, and show that they are not straightforward corollaries of each other. We further suggest that, while the framework of preference aggregation can be mapped into the framework of judgment aggregation, there exists no obvious reverse mapping. Finally, we address one particular minimal condition that is used in both theorems – an independence condition – and suggest that this condition points towards a unifying property underlying both impossibility results.
Political theorists have offered many accounts of collective decision-making under pluralism. I discuss a key dimension on which such accounts differ: the importance assigned not only to the choices made but also to the reasons underlying those choices. On that dimension, different accounts lie in between two extremes. The ‘minimal liberal account’ holds that collective decisions should be made only on practical actions or policies and that underlying reasons should be kept private. The ‘comprehensive deliberative account’ stresses the importance of giving reasons for collective decisions, where such reasons should also be collectively decided. I compare these two accounts on the basis of a formal model developed in the growing literature on the ‘discursive dilemma’ and ‘judgment aggregation’ and address several questions: What is the trade-off between the (minimal liberal) demand for reaching agreement on outcomes and the (comprehensive deliberative) demand for reason-giving? How large should the ‘sphere of public reason’ be? When do the decision procedures suggested by the two accounts agree, and when do they not? How good are these procedures at truth-tracking on factual matters? What strategic incentives do they generate for decision-makers? My discussion identifies what is at stake in the choice between minimal liberal and comprehensive deliberative accounts of collective decision-making, and sheds light not only on these two ideal-typical accounts themselves, but also on many characteristics that intermediate accounts share with them.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
There is a surprising disconnect between formal rational choice theory and philosophical work on reasons. The former is silent on the role of reasons in rational choices, while the latter rarely engages with the formal models of decision problems used by social scientists. To bridge this gap, we propose a new, reason-based theory of rational choice. At its core is an account of preference formation, according to which an agent’s preferences are determined by his or her motivating reasons, together with a ‘weighing relation’ between different combinations of reasons. By explaining how someone’s preferences may vary with changes in his or her motivating reasons, our theory illuminates the relationship between deliberation about reasons and rational choices. Although primarily positive, the theory can also help us think about how those preferences and choices ought to respond to normative reasons.
Agents are often assumed to have degrees of belief (“credences”) and also binary beliefs (“beliefs simpliciter”). How are these related to each other? A much-discussed answer asserts that it is rational to believe a proposition if and only if one has a high enough degree of belief in it. But this answer runs into the “lottery paradox”: the set of believed propositions may violate the key rationality conditions of consistency and deductive closure. In earlier work, we showed that this problem generalizes: there exists no local function from degrees of belief to binary beliefs that satisfies some minimal conditions of rationality and non-triviality. “Locality” means that the binary belief in each proposition depends only on the degree of belief in that proposition, not on the degrees of belief in others. One might think that the impossibility can be avoided by dropping the assumption that binary beliefs are a function of degrees of belief. We prove that, even if we drop the “functionality” restriction, there still exists no local relation between degrees of belief and binary beliefs that satisfies some minimal conditions. Thus functionality is not the source of the impossibility; its source is the condition of locality. If there is any non-trivial relation between degrees of belief and binary beliefs at all, it must be a “holistic” one. We explore several concrete forms this “holistic” relation could take.
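A small sketch of the lottery paradox mentioned above, using a hypothetical three-ticket fair lottery and an illustrative belief threshold of 0.6: each "ticket i does not win" proposition clears the threshold, yet the resulting belief set is jointly inconsistent.

```python
# A fair 3-ticket lottery: exactly one ticket wins.
worlds = ["ticket_1_wins", "ticket_2_wins", "ticket_3_wins"]
credence = {w: 1 / 3 for w in worlds}

threshold = 0.6  # Lockean rule: believe a proposition iff its credence is at least 0.6

# Propositions of the form "ticket i does NOT win", each with credence 2/3 >= 0.6,
# plus "some ticket wins", with credence 1.
propositions = {f"not_{w}": {v for v in worlds if v != w} for w in worlds}
propositions["some_ticket_wins"] = set(worlds)

believed = {name for name, event in propositions.items()
            if sum(credence[v] for v in event) >= threshold}
print("Believed:", believed)

# Consistency check: is there a world in which every believed proposition is true?
consistent = any(all(w in propositions[name] for name in believed) for w in worlds)
print("Jointly consistent?", consistent)  # False: the believed set rules out every world
```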
We introduce a “reason-based” framework for explaining and predicting individual choices. It captures the idea that a decision-maker focuses on some but not all properties of the options and chooses an option whose motivationally salient properties he/she most prefers. Reason-based explanations allow us to distinguish between two kinds of context-dependent choice: the motivationally salient properties may (i) vary across choice contexts, and (ii) include not only “intrinsic” properties of the options, but also “context-related” properties. Our framework can accommodate boundedly rational and sophisticatedly rational choice. Since properties can be recombined in new ways, it also offers resources for predicting choices in unobserved contexts.
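One toy way to operationalise this idea (the options, properties, weights, and the additive scoring rule are illustrative assumptions, not the paper's formal definitions):

```python
# Each option is described by a set of properties; the decision-maker attends only
# to the motivationally salient properties in the current context and picks the
# option whose salient-property bundle scores highest under a simple weighing.

options = {
    "apple": {"healthy", "cheap"},
    "cake":  {"tasty", "cheap", "last_item_on_plate"},
}

weights = {"healthy": 2, "tasty": 3, "cheap": 1, "last_item_on_plate": -4}

def choose(options, salient, weights):
    def score(props):
        return sum(weights[p] for p in props & salient)
    return max(options, key=lambda o: score(options[o]))

# Context 1: politeness matters, so the context-related property "last_item_on_plate"
# is salient and counts against taking the cake.
print(choose(options, {"healthy", "tasty", "cheap", "last_item_on_plate"}, weights))  # apple

# Context 2: the same intrinsic properties, but the context-related property is not salient.
print(choose(options, {"healthy", "tasty", "cheap"}, weights))  # cake
```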
The systems studied in the special sciences are often said to be causally autonomous, in the sense that their higher-level properties have causal powers that are independent of the causal powers of their more basic physical properties. This view was espoused by the British emergentists, who claimed that systems achieving a certain level of organizational complexity have distinctive causal powers that emerge from their constituent elements but do not derive from them. More recently, non-reductive physicalists have espoused a similar view about the causal autonomy of special-science properties. They argue that since these properties can typically have multiple physical realizations, they are not identical to physical properties, and further they possess causal powers that differ from those of their physical realizers. Despite the orthodoxy of this view, it is hard to find a clear exposition of its meaning or a defence of it in terms of a well-motivated account of causation. In this paper, we aim to address this gap in the literature by clarifying what is implied by the doctrine of the causal autonomy of special-science properties and by defending the doctrine using a prominent theory of causation from the philosophy of science.
Our aim in this survey article is to provide an accessible overview of some key results and questions in the theory of judgment aggregation. We omit proofs and technical details, focusing instead on concepts and underlying ideas.
Scientists and philosophers frequently speak about levels of description, levels of explanation, and ontological levels. This paper presents a framework for studying levels. I give a general definition of a system of levels and discuss several applications, some of which refer to descriptive or explanatory levels while others refer to ontological levels. I illustrate the usefulness of this framework by bringing it to bear on some familiar philosophical questions. Is there a hierarchy of levels, with a fundamental level at the bottom? And what does the answer to this question imply for physicalism, the thesis that everything supervenes on the physical? Are there emergent higher-level properties? Are higher-level descriptions reducible to lower-level ones? Can the relationship between normative and non-normative domains be viewed as one involving levels? And might a levelled framework shed light on the relationship between third-personal and first-personal phenomena?
This paper offers a comparison of three different kinds of collective attitudes: aggregate, common, and corporate attitudes. They differ not only in their relationship to individual attitudes—e.g., whether they are “reducible” to individual attitudes—but also in the roles they play in relation to the collectives to which they are ascribed. The failure to distinguish them can lead to confusion, in informal talk as well as in the social sciences. So, the paper’s message is an appeal for disambiguation.
Are groups ever capable of bearing responsibility, over and above their individual members? This chapter discusses and defends the view that certain organized collectives – namely, those that qualify as group moral agents – can be held responsible for their actions, and that group responsibility is not reducible to individual responsibility. The view has important implications. It supports the recognition of corporate civil and even criminal liability in our legal systems, and it suggests that, by recognizing group agents as loci of responsibility, we may be able to avoid “responsibility gaps” in some cases of collectively caused harms for which there is a shortfall of individual responsibility. The chapter further asks whether the view that certain groups are responsible agents commits us to the view that those groups should also be given rights of their own and gives a qualified negative answer.
Which rules for aggregating judgments on logically connected propositions are manipulable and which not? In this paper, we introduce a preference-free concept of non-manipulability and contrast it with a preference-theoretic concept of strategy-proofness. We characterize all non-manipulable and all strategy-proof judgment aggregation rules and prove an impossibility theorem similar to the Gibbard–Satterthwaite theorem. We also discuss weaker forms of non-manipulability and strategy-proofness. Comparing two frequently discussed aggregation rules, we show that “conclusion-based voting” is less vulnerable to manipulation than “premise-based voting”, which is strategy-proof only for “reason-oriented” individuals. Surprisingly, for “outcome-oriented” individuals, the two rules are strategically equivalent, generating identical judgments in equilibrium. Our results introduce game-theoretic considerations into judgment aggregation and have implications for debates on deliberative democracy.
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events – the agenda – is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
In this paper, I introduce the emerging theory of judgment aggregation as a framework for studying institutional design in social epistemology. When a group or collective organization is given an epistemic task, its performance may depend on its ‘aggregation procedure’, i.e. its mechanism for aggregating the group members’ individual beliefs or judgments into corresponding collective beliefs or judgments endorsed by the group as a whole. I argue that a group’s aggregation procedure plays an important role in determining whether the group can meet two challenges: the ‘rationality challenge’ and the ‘knowledge challenge’. The rationality challenge arises when a group is required to endorse consistent beliefs or judgments; the knowledge challenge arises when the group’s beliefs or judgments are required to track certain truths. My discussion seeks to identify those properties of an aggregation procedure that affect a group’s success at meeting each of the two challenges.
The two most influential traditions of contemporary theorizing about democracy, social choice theory and deliberative democracy, are generally thought to be at loggerheads, in that the former demonstrates the impossibility, instability or meaninglessness of the rational collective outcomes sought by the latter. We argue that the two traditions can be reconciled. After expounding the central Arrow and Gibbard-Satterthwaite impossibility results, we reassess their implications, identifying the conditions under which meaningful democratic decision making is possible. We argue that deliberation can promote these conditions, and hence that social choice theory suggests not that democratic decision making is impossible, but rather that democracy must have a deliberative aspect.
In normative political theory, it is widely accepted that democracy cannot be reduced to voting alone, but that it requires deliberation. In formal social choice theory, by contrast, the study of democracy has focused primarily on the aggregation of individual opinions into collective decisions, typically through voting. While the literature on deliberation has an optimistic flavour, the literature on social choice is more mixed. It is centred around several paradoxes and impossibility results identifying conflicts between different intuitively plausible desiderata. In recent years, there has been a growing dialogue between the two literatures. This paper discusses the connections between them. Important insights are that (i) deliberation can complement aggregation and open up an escape route from some of its negative results; and (ii) the formal models of social choice theory can shed light on some aspects of deliberation, such as the nature of deliberation-induced opinion change.
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
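A minimal sketch of the two revision rules on a finite set of worlds (the probability numbers are illustrative):

```python
def bayes_revision(prior, event):
    """Bayes's rule: condition on the information that `event` has occurred."""
    total = sum(p for w, p in prior.items() if w in event)
    return {w: (p / total if w in event else 0.0) for w, p in prior.items()}

def jeffrey_revision(prior, partition_probs):
    """Jeffrey's rule: the input is a new probability assignment over a partition
    of events; within each cell, probabilities keep their old proportions."""
    posterior = {}
    for event, new_p in partition_probs.items():
        old_p = sum(prior[w] for w in event)
        for w in event:
            posterior[w] = prior[w] * new_p / old_p
    return posterior

# Worlds: it rains or not, and I carry an umbrella or not (hypothetical numbers).
prior = {("rain", "umbrella"): 0.3, ("rain", "no_umbrella"): 0.1,
         ("dry", "umbrella"): 0.2, ("dry", "no_umbrella"): 0.4}

rain = {w for w in prior if w[0] == "rain"}
dry = {w for w in prior if w[0] == "dry"}

# Bayes: I learn for certain that it rains.
print(bayes_revision(prior, rain))

# Jeffrey: a glance at the sky shifts my probability of rain to 0.7, without certainty.
print(jeffrey_revision(prior, {frozenset(rain): 0.7, frozenset(dry): 0.3}))
```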
Scientists often think of the world as a dynamical system, a stochastic process, or a generalization of such a system. Prominent examples of systems are the system of planets orbiting the sun or any other classical mechanical system, a hydrogen atom or any other quantum-mechanical system, and the earth’s atmosphere or any other statistical mechanical system. We introduce a general and unified framework for describing such systems and show how it can be used to examine some familiar philosophical questions, including the following: how can we define nomological possibility, necessity, determinism, and indeterminism; what are symmetries and laws; what regularities must a system display to make scientific inference possible; how might principles of parsimony such as Occam’s Razor help when we make such inferences; what is the role of space and time in a system; and might they be emergent features? Our framework is intended to serve as a toolbox for the formal analysis of systems that is applicable in several areas of philosophy.
We offer a critical assessment of the “exclusion argument” against free will, which may be summarized by the slogan: “My brain made me do it, therefore I couldn't have been free”. While the exclusion argument has received much attention in debates about mental causation (“could my mental states ever cause my actions?”), it is seldom discussed in relation to free will. However, the argument informally underlies many neuroscientific discussions of free will, especially the claim that advances in neuroscience seriously challenge our belief in free will. We introduce two distinct versions of the argument, discuss several unsuccessful responses to it, and then present our preferred response. This involves showing that a key premise – the “exclusion principle” – is false under what we take to be the most natural account of causation in the context of agency: the difference-making account. We finally revisit the debate about neuroscience and free will.
Under the independence and competence assumptions of Condorcet’s classical jury model, the probability of a correct majority decision converges to certainty as the jury size increases, a seemingly unrealistic result. Using Bayesian networks, we argue that the model’s independence assumption requires that the state of the world (guilty or not guilty) is the latest common cause of all jurors’ votes. But often – arguably in all courtroom cases and in many expert panels – the latest such common cause is a shared ‘body of evidence’ observed by the jurors. In the corresponding Bayesian network, the votes are direct descendants not of the state of the world, but of the body of evidence, which in turn is a direct descendant of the state of the world. We develop a model of jury decisions based on this Bayesian network. Our model permits the possibility of misleading evidence, even for a maximally competent observer, which cannot easily be accommodated in the classical model. We prove that (i) the probability of a correct majority verdict converges to the probability that the body of evidence is not misleading, a value typically below 1; (ii) depending on the required threshold of ‘no reasonable doubt’, it may be impossible, even in an arbitrarily large jury, to establish guilt of a defendant ‘beyond any reasonable doubt’.
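A toy simulation of the two-layer structure described above, in which jurors respond to a shared body of evidence rather than to the state of the world directly (the competence figures are invented; the paper's formal model is more general):

```python
import random

def correct_majority_prob(n_jurors, p_evidence_not_misleading, juror_competence,
                          n_trials=20_000):
    """Jurors do not observe the state of the world directly; they observe a shared
    body of evidence, which points to the true state with probability
    `p_evidence_not_misleading`. Each juror then reads the evidence correctly with
    probability `juror_competence`, independently of the others."""
    correct = 0
    for _ in range(n_trials):
        evidence_points_to_truth = random.random() < p_evidence_not_misleading
        votes_for_evidence = sum(random.random() < juror_competence
                                 for _ in range(n_jurors))
        majority_follows_evidence = votes_for_evidence > n_jurors / 2
        # The verdict is correct iff the majority follows evidence that is not misleading,
        # or the majority happens to vote against evidence that is misleading.
        if majority_follows_evidence == evidence_points_to_truth:
            correct += 1
    return correct / n_trials

for n in (11, 101, 1001):
    print(n, round(correct_majority_prob(n, 0.9, 0.8), 3))
# The estimate approaches 0.9, the probability that the evidence is not misleading,
# rather than 1, as it would in the classical model.
```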
Social choice theory is the study of collective decision processes and procedures. It is not a single theory, but a cluster of models and results concerning the aggregation of individual inputs (e.g., votes, preferences, judgments, welfare) into collective outputs (e.g., collective decisions, preferences, judgments, welfare). Central questions are: How can a group of individuals choose a winning outcome (e.g., policy, electoral candidate) from a given set of options? What are the properties of different voting systems? When is a voting system democratic? How can a collective (e.g., electorate, legislature, collegial court, expert panel, or committee) arrive at coherent collective preferences or judgments on some issues, on the basis of its members' individual preferences or judgments? How can we rank different social alternatives in an order of social welfare? Social choice theorists study these questions not just by looking at examples, but by developing general models and proving theorems.
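A small illustration of the kind of question studied formally in this field: for the hypothetical preference profile below, plurality voting selects one winner while the Borda count and pairwise majority comparisons favour another.

```python
from itertools import combinations

# A hypothetical preference profile: each ranking is listed best-first,
# with the number of voters holding it.
profile = [
    (3, ["A", "B", "C"]),
    (2, ["B", "C", "A"]),
    (2, ["C", "B", "A"]),
]
candidates = ["A", "B", "C"]

def plurality_scores(profile):
    """Each voter's top-ranked candidate gets one point."""
    return {c: sum(n for n, ranking in profile if ranking[0] == c) for c in candidates}

def borda_scores(profile):
    """A candidate gets (number of candidates ranked below it) points per voter."""
    m = len(candidates)
    return {c: sum(n * (m - 1 - ranking.index(c)) for n, ranking in profile)
            for c in candidates}

def pairwise_majorities(profile):
    """For each pair, report which candidate a majority ranks higher."""
    results = {}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(n for n, r in profile if r.index(a) < r.index(b))
        b_over_a = sum(n for n, r in profile if r.index(b) < r.index(a))
        results[(a, b)] = a if a_over_b > b_over_a else b
    return results

print("Plurality:", plurality_scores(profile))    # A wins on first-place votes
print("Borda:    ", borda_scores(profile))        # B wins on rank-sum scores
print("Pairwise: ", pairwise_majorities(profile)) # B beats both A and C head-to-head
```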
Several recent results on the aggregation of judgments over logically connected propositions show that, under certain conditions, dictatorships are the only propositionwise aggregation functions generating fully rational (i.e., complete and consistent) collective judgments. A frequently mentioned route to avoid dictatorships is to allow incomplete collective judgments. We show that this route does not lead very far: we obtain oligarchies rather than dictatorships if instead of full rationality we merely require that collective judgments be deductively closed, arguably a minimal condition of rationality, compatible even with empty judgment sets. We derive several characterizations of oligarchies and provide illustrative applications to Arrowian preference aggregation and Kasher and Rubinstein's group identification problem.
In the emerging literature on judgment aggregation over logically connected propositions, expert rights or liberal rights have not been investigated yet. A group making collective judgments may assign individual members or subgroups with expert knowledge on, or particularly affected by, certain propositions the right to determine the collective judgment on those propositions. We identify a problem that generalizes Sen's 'liberal paradox'. Under plausible conditions, the assignment of rights to two or more individuals or subgroups is inconsistent with the unanimity principle, whereby unanimously accepted propositions are collectively accepted. The inconsistency can be avoided if individual judgments or rights satisfy special conditions.
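A minimal sketch of the kind of conflict described above (the agenda and the two-member profile are invented for illustration): individual 1 has the right to determine the collective judgment on p, individual 2 on q, both unanimously accept "not both p and q", and applying the rights together with the unanimity principle yields an inconsistent collective judgment set.

```python
from itertools import product

# Individual judgment sets (each internally consistent):
#   individual 1 accepts p, rejects q, accepts "not both"
#   individual 2 rejects p, accepts q, accepts "not both"
ind_1 = {"p": True,  "q": False, "not_both": True}
ind_2 = {"p": False, "q": True,  "not_both": True}

# Rights: individual 1 decides p, individual 2 decides q.
collective = {
    "p": ind_1["p"],
    "q": ind_2["q"],
    # Unanimity principle: "not both" is unanimously accepted, so the group accepts it.
    "not_both": ind_1["not_both"] and ind_2["not_both"],
}
print("Collective:", collective)

# Consistency check over all truth-value assignments to p and q,
# reading "not_both" as not (p and q):
consistent = any(
    collective["p"] == p and collective["q"] == q
    and collective["not_both"] == (not (p and q))
    for p, q in product([True, False], repeat=2)
)
print("Consistent?", consistent)  # False: {p, q, not (p and q)} is inconsistent
```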
The exclusion argument is widely thought to put considerable pressure on dualism, if not to refute it outright. We argue to the contrary that, whether or not their position is ultimately true, dualists have a plausible response. The response focuses on the notion of ‘distinctness’ as it occurs in the argument: if ‘distinctness’ is understood one way, the exclusion principle on which the argument is founded can be denied by the dualist; if it is understood another way, the argument is not persuasive.
I model sequential decisions over multiple interconnected propositions and investigate path-dependence in such decisions. The propositions and their interconnections are represented in propositional logic. A sequential decision process is path-dependent if its outcome depends on the order in which the propositions are considered. Assuming that earlier decisions constrain later ones, I prove three main results: First, certain rationality violations by the decision-making agent—individual or group—are necessary and sufficient for path-dependence. Second, under some conditions, path-dependence is unavoidable in decisions made by groups. Third, path-dependence makes decisions vulnerable to strategic agenda setting and strategic voting. I also discuss escape routes from path-dependence. My results are relevant to discussions on collective consistency and reason-based decision-making, focusing not only on outcomes, but also on underlying reasons, beliefs, and constraints.
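A small sketch of path-dependence in sequential propositionwise majority decisions, reusing the discursive-dilemma profile from the earlier sketch (labels invented): with earlier decisions constraining later ones, the verdict on the conjunction depends on whether the premises or the conclusion are decided first.

```python
from itertools import product

def entailed_value(decisions, prop):
    """Return the truth value forced for `prop` by the earlier decisions, or None if
    both values remain logically open (reading p_and_q as the conjunction of p and q)."""
    valuations = [{"p": p, "q": q, "p_and_q": p and q}
                  for p, q in product([True, False], repeat=2)]
    admissible = [v for v in valuations
                  if all(v[d] == val for d, val in decisions.items())]
    values = {v[prop] for v in admissible}
    return values.pop() if len(values) == 1 else None

def majority(profile, prop):
    return sum(judge[prop] for judge in profile) > len(profile) / 2

def sequential_decision(profile, order):
    """Decide propositions one by one by majority, unless an earlier decision already
    logically settles the proposition, in which case the entailed value is imposed."""
    decisions = {}
    for prop in order:
        forced = entailed_value(decisions, prop)
        decisions[prop] = forced if forced is not None else majority(profile, prop)
    return decisions

profile = [
    {"p": True,  "q": True,  "p_and_q": True},
    {"p": True,  "q": False, "p_and_q": False},
    {"p": False, "q": True,  "p_and_q": False},
]

print(sequential_decision(profile, ["p", "q", "p_and_q"]))  # premises first: conjunction accepted
print(sequential_decision(profile, ["p_and_q", "p", "q"]))  # conclusion first: a premise gets rejected
```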