The author defends the ancient claim that justice is, at bottom, a body of social conventions. Recent analytical and empirical results from the social sciences are integrated with the insights and arguments of past masters of moral and political philosophy into a new game-theoretic conventionalist analysis of justice.
A convention is a state in which agents coordinate their activity, not as the result of an explicit agreement, but because their expectations are aligned so that each individual believes that all will act so as to achieve coordination for mutual benefit. Since agents are said to follow a convention if they coordinate without explicit agreement, the notion raises fundamental questions: (1) Why do certain conventions remain stable over time? and (2) How does a convention emerge in the first place? In a pioneering study, Lewis (1969) addresses these questions by applying noncooperative game theory. Lewis defines a convention as a Nash coordination equilibrium of a noncooperative game that is salient, that is, somehow conspicuous to the agents so that all expect one another to conform with the equilibrium. This paper presents a new game-theoretic definition of convention, which formalizes the notion of salience and generalizes the class of conventions Lewis discusses in his work. I define a convention as a correlated equilibrium (Aumann 1974, 1987) satisfying a public intentions criterion: every agent wants his intended action to be common knowledge. I argue that many conventions correspond to correlated equilibria that are not Nash equilibria, and that this is consistent with Lewis' general viewpoint. Finally, I argue that game-theoretic characterizations of convention, such as Lewis' and my own, help to explain a convention's stability, but that a fully satisfactory account of the emergence of convention requires a theory of equilibrium selection beyond the scope of Lewis' work.
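The claim that a convention can correspond to a correlated equilibrium that is not a Nash equilibrium can be illustrated with Aumann's textbook Chicken example, which is not drawn from the paper itself; the payoffs and the uniform signal over three action profiles below are the standard illustration, sketched here only to show how the correlated-equilibrium incentive constraints are checked.

```python
# A minimal check, assuming Aumann's textbook Chicken payoffs (not the paper's
# example): a public signal recommends (Swerve, Swerve), (Swerve, Dare) or
# (Dare, Swerve), each with probability 1/3.

# Strategies: 0 = Swerve, 1 = Dare; payoff[(row, col)] = (row payoff, col payoff)
payoff = {
    (0, 0): (6, 6),
    (0, 1): (2, 7),
    (1, 0): (7, 2),
    (1, 1): (0, 0),
}
mu = {(0, 0): 1/3, (0, 1): 1/3, (1, 0): 1/3, (1, 1): 0.0}

def is_correlated_equilibrium(mu, payoff, tol=1e-9):
    """No player should gain by deviating from a recommended action, given the
    posterior over the opponent's recommendation that it induces."""
    for i in (0, 1):                  # player: 0 = row, 1 = column
        for rec in (0, 1):            # recommended action
            for dev in (0, 1):        # candidate deviation
                gain = 0.0
                for other in (0, 1):  # opponent's recommended action
                    prof = (rec, other) if i == 0 else (other, rec)
                    alt = (dev, other) if i == 0 else (other, dev)
                    gain += mu[prof] * (payoff[alt][i] - payoff[prof][i])
                if gain > tol:
                    return False
    return True

print(is_correlated_equilibrium(mu, payoff))   # True for this distribution
```

Because no player gains by deviating from a recommended action, the distribution is a correlated equilibrium; because it does not factor into independent mixed strategies, and its expected payoff of (5, 5) is not attainable at any Nash equilibrium of this game, it is not a Nash equilibrium.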
I reply to commentaries by Justin Bruner, Robert Sugden and Gerald Gaus. My response to Bruner focuses on conventions of bargaining problems and arguments for characterizing the just conventions of these problems as monotone path solutions. My response to Sugden focuses on how the laws of humanity present in Hume’s discussion of vulnerable individuals might be incorporated into my own proposed account of justice as mutual advantage. My response to Gaus focuses on whether or not my account of justice as mutual advantage can incorporate deep differences in values across subgroups of a larger society.
One does not simply predict where the other will go, which is wherever the first predicts the second to predict the first to go, and so ad infinitum. Not "What would I do if I were she?" but "What would I do if I were she wondering what she would do if she were wondering what I would do if I were she...?".
I propose a dynamical analysis of interaction in anarchy, and argue that this kind of dynamical analysis is a more promising route to predicting the outcome of anarchy than the more traditional a priori analyses of anarchy in the literature. I criticize previous a priori analyses of anarchy on the grounds that these analyses assume that the individuals in anarchy share a unique set of preferences over the possible outcomes of war, peace, exploiting others and suffering exploitation. Following Hobbes' classic analysis of anarchy, I maintain that typically in anarchy some moderate individuals will most desire mutual cooperation while other dominators will most desire to exploit others' cooperation. I argue that once one allows for different types of individuals in anarchy, any a priori analysis of anarchy requires unrealistic assumptions regarding the agents' common knowledge of their situation. However, this move also suggests a dynamical analysis of anarchy, one that assumes no common knowledge. In the Variable Anticipation Threshold model developed here, individuals modify their behavior as they learn from repeated interactions. I present specific instances of this model where the individuals in anarchy converge to different equilibria corresponding to either peace or war, depending on the initial conditions. I show that individuals are liable to converge to Hobbes' war of all against all even if only a small percentage of them are dominators. The presence of only a few “nasty” individuals gradually drives all, including those inclined to be “nicer”, to imitate the “nasty” conduct of these few. This dynamical analysis suggests that the Hobbesian war in anarchy is indeed inevitable in most realistic circumstances.
"You have the same propension, that I have, in favor of what is contiguous above what is remote. You are, therefore, naturally carry'd to commit acts of injustice as well as I. Your example both pushes me forward in this way by imitation, and also affords me a new reason for any breach of equity, by showing me, that I shou'd be the cully of my integrity, if I alone shou'd impose on myself a severe restraint amidst the licentiousness of others." (David Hume, A Treatise of Human Nature)
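The following toy simulation is not the Variable Anticipation Threshold model itself, whose details are not given here; it is a minimal sketch, under assumed threshold values, of the kind of dynamic the abstract describes, in which agents condition cooperation on the level of cooperation they last observed and a few dominators can tip a mostly moderate population into general defection.

```python
# Toy threshold dynamic, not the paper's Variable Anticipation Threshold model:
# each agent cooperates next round only if the cooperation share observed this
# round meets her personal threshold. Dominators are given an unreachable
# threshold, so they always defect. All numbers are illustrative assumptions.

import random

def simulate(n_agents=100, share_dominators=0.05, rounds=200, seed=1):
    rng = random.Random(seed)
    thresholds = [2.0 if rng.random() < share_dominators      # dominator
                  else rng.uniform(0.6, 0.98)                  # moderate
                  for _ in range(n_agents)]
    coop = [t <= 1.0 for t in thresholds]   # moderates start out cooperative
    history = []
    for _ in range(rounds):
        share = sum(coop) / n_agents
        history.append(share)
        coop = [share >= t for t in thresholds]   # revise on the observed share
    return history

trace = simulate()
print(f"initial cooperation share: {trace[0]:.2f}, final: {trace[-1]:.2f}")
# With these assumed numbers the share usually starts near 0.95, drops below the
# strictest moderates' thresholds, and defection then cascades round by round:
# a few dominators gradually drive the whole population toward war.
```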
We introduce a dynamic model for evolutionary games played on a network where strategy changes are correlated according to degree of influence between players. Unlike the notion of stochastic stability, which assumes mutations are stochastically independent and identically distributed, our framework allows for the possibility that agents correlate their strategies with the strategies of those they trust, or those who have influence over them. We show that the dynamical properties of evolutionary games, where such influence neighborhoods appear, differ dramatically from those where all mutations are stochastically independent, and establish some elementary convergence results relevant for the evolution of social institutions.
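A minimal sketch of what influence-correlated strategy revision might look like, under assumptions of my own rather than the authors' exact model: agents best respond to their network neighbours in an assumed 2x2 coordination game, and a mutation by an influential agent is copied, with probability given by the influence weight, by those who trust it, so mutations arrive in correlated clusters rather than independently.

```python
# Toy sketch of influence-correlated mutation on a network; the ring topology,
# influence weights and coordination payoffs are assumptions for illustration.

import random

def step(strategies, neighbours, influence, payoff, mutate_rate, rng):
    n = len(strategies)
    # Best-response phase against current neighbourhood play (random tie-break).
    new = [max((0, 1), key=lambda s: (sum(payoff[(s, strategies[j])]
                                          for j in neighbours[i]), rng.random()))
           for i in range(n)]
    # Correlated mutation phase: followers copy the mutant with probability w.
    for i in range(n):
        if rng.random() < mutate_rate:
            new[i] = rng.choice((0, 1))
            for j, w in influence.get(i, {}).items():
                if rng.random() < w:
                    new[j] = new[i]
    return new

rng = random.Random(7)
neighbours = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # ring of 8 agents
influence = {0: {1: 0.9, 2: 0.9}}          # agent 0 strongly influences 1 and 2
payoff = {(0, 0): 2, (1, 1): 1, (0, 1): 0, (1, 0): 0}   # assumed coordination payoffs
strategies = [rng.choice((0, 1)) for _ in range(8)]
for _ in range(50):
    strategies = step(strategies, neighbours, influence, payoff, 0.01, rng)
print(strategies)
# A single mutation by agent 0 can flip its whole influence neighbourhood at
# once, so transitions between the two conventions no longer require many
# independent simultaneous mutations.
```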
In their classic analyses, Hobbes and Hume argue that offensively violating a covenant is irrational because the offense ruins one’s reputation. This paper explores conditions under which reputation alone can enforce covenants. The members of a community are modeled as interacting in a Covenant Game repeated over time. Folk theorems are presented that give conditions under which the Humean strategy of performing in covenants only with those who have never offensively violated a covenant or performed with an offensive violator characterizes an equilibrium of the repeated Covenant Game. These folk theorems establish that for certain ideal settings Hobbes’ and Hume’s arguments against offensively violating covenants are compelling. However, these ideal settings presuppose that the community has certain mechanisms that generate common knowledge of the identities of those with whom one should perform. I analyze the results of computer simulations of the interactions in a community whose members must rely upon private communication alone. The computer simulation data show that in this community, reputation effects cannot effectively deter members from offensively violating covenants. I conclude that Hobbes’ and Hume’s warnings against offensive violation are compelling only on condition that the community is sufficiently structured to generate common knowledge among its members. I also conclude that even in such structured communities, the Humean strategy is not the uniquely “correct” policy.
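The sufficiency half of folk theorems of this kind typically reduces to a patience condition. The inequality below is a generic grim-trigger style bound for an assumed stage game with temptation payoff T, mutual-performance payoff R and mutual-nonperformance payoff P; it is offered only as an illustration of the form such conditions take, not as the paper's own theorem.

```latex
% Generic sufficiency condition of the grim-trigger type (an illustration, not
% the paper's theorem). Assume stage payoffs T > R > P for exploiting a
% performer, mutual performance and mutual non-performance, and a common
% discount factor \delta \in (0, 1).
\[
  \underbrace{T - R}_{\text{one-shot gain from violating}}
  \;\le\;
  \frac{\delta}{1-\delta}\,
  \underbrace{(R - P)}_{\text{per-period loss once excluded}}
  \qquad\Longleftrightarrow\qquad
  \delta \;\ge\; \frac{T - R}{T - P}.
\]
```

Community-enforcement versions replace punishment by the victim alone with exclusion by everyone who learns of the violation, which is why the common knowledge mechanisms discussed in the abstract matter for whether a bound of this form can be met.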
In this book, Vanderschraaf develops a new theory of equilibrium selection in games. The theory defends general correlated equilibrium concepts and proposes a new analysis of convention.
Since at least as long ago as Plato’s time, philosophers have considered the possibility that justice is at bottom a system of rules that members of society follow for mutual advantage. Some maintain that justice as mutual advantage is a fatally flawed theory of justice because it is too exclusive. Proponents of a Vulnerability Objection argue that justice as mutual advantage would deny the most vulnerable members of society any of the protections and other benefits of justice. I argue that the Vulnerability Objection presupposes that in a justice-as-mutual-advantage society only those who can and do contribute to the cooperative surplus of benefits that compliance with justice creates are owed any share of these benefits. I argue that justice as mutual advantage need not include such a Contribution Requirement. I show by example that a justice-as-mutual-advantage society can extend the benefits of justice to all its members, including the vulnerable who cannot contribute. I close by arguing that if one does not presuppose a Contribution Requirement, then a justice-as-mutual-advantage society might require its members to extend the benefits of justice to humans that some maintain are not persons (for example, embryos) and to certain nonhuman creatures. I conclude that the real problem for defenders of justice as mutual advantage is that this theory of justice threatens to be too inclusive.
I explore the evolution of strategies in an Augmented Stag Hunt game that adds a punishing strategy to the ordinary Stag Hunt strategies of cooperating, which aims for optimality, and defecting, which “plays it safe.” Cooperating weakly dominates punishing, and defecting is the unique evolutionarily stable strategy (ESS). Nevertheless, for a wide class of Augmented Stag Hunts, polymorphic strategies combining punishing and cooperating collectively have greater attracting power for replicator dynamics than that of the ESS. The analysis here lends theoretical support to the altruistic punishment hypothesis in the social sciences.
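The following is a replicator-dynamics sketch of the kind of game described, with hypothetical payoffs, a punishment cost c and a fine f of my own choosing rather than the paper's parameters: Cooperate and Punish both hunt stag, Punish additionally pays c to fine a defecting partner by f, and Defect hunts hare for a safe payoff.

```python
# Replicator-dynamics sketch for an Augmented Stag Hunt with hypothetical
# payoffs (not the paper's): strategies ordered (Cooperate, Defect, Punish).

import numpy as np

c, f = 1.0, 4.0                 # assumed punishment cost and fine
A = np.array([
    [4.0, 0.0,     4.0],        # Cooperate: stag with C or P, nothing vs Defect
    [3.0, 3.0, 3.0 - f],        # Defect: safe hare payoff, fined by Punish
    [4.0,  -c,     4.0],        # Punish: like Cooperate, but pays c vs Defect
])

def replicator(x, steps=2000, dt=0.01):
    """Euler steps of the replicator dynamic x_i' = x_i (f_i - mean fitness)."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        fitness = A @ x
        x = x + dt * x * (fitness - x @ fitness)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

for start in ([0.4, 0.3, 0.3], [0.2, 0.6, 0.2], [0.1, 0.8, 0.1]):
    print(start, "->", np.round(replicator(start), 3))
# Depending on the starting point the population settles either on all-Defect
# or on a Cooperate/Punish mixture; comparing the sizes of these basins as c
# and f vary is the kind of attracting-power question the abstract describes.
```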
Hume is rightly credited with giving a brilliant, and perhaps the best, account of justice as convention. Hume's importance as a forerunner of modern economics has also long been recognized. However, most of Hume's readers have not fully appreciated how closely Hume's analysis of convention foreshadows a particular branch of economic theory, namely, game theory. Starting with the work of Barry, Runciman and Sen, and Lewis, there has been a flowering of literature on the informal game-theoretic insights to be found in classics of political philosophy such as Hobbes, Locke, Hume and Rousseau. A number of authors in this tradition, including Lewis, Gauthier, Mackie, and Postema, have identified passages in Hume which they interpret as giving informal examples of specific games. Yet, unlike his predecessors Hobbes and Locke, Hume does much more than present examples which have a game-theoretic structure. In his account of convention, Hume gives general conditions which characterize the resolution of social interaction problems, and in the examples he uses to illustrate this account, Hume outlines several different methods by which agents can arrive at such a resolution. Hume's general account of convention and his explanations of the origins of particular conventions together constitute a theory of strategic interaction, which is precisely what game theory aspires to be.
I review the classic skeptical challenges of the Foole in Leviathan and the Lydian Shepherd in Republic against the prudential rationality of justice. Attempts to meet these challenges contribute to the reconciliation project (Kavka, Hobbesian Moral and Political Theory, 1986) that tries to establish that morality is compatible with rational prudence. I present a new Invisible Foole challenge against the prudential rationality of justice. Like the Lydian Shepherd, the Invisible Foole can violate justice offensively (Kavka, Hobbesian Moral and Political Theory, 1986; Law and Philosophy, 14:5–34, 1995) without harming his reputation for justice. And like the Foole, the Invisible Foole dismisses the possibility that being just preserves goods intrinsic to justice, and will be just only if he fears that others will punish his injustice by withholding external goods, such as labor and material goods, that he would otherwise receive through their performance in covenants. I argue that, given a plausible folk-theorem interpretation, Hobbes’ response to the Foole’s challenge is inconclusive, and depends crucially upon common knowledge assumptions that may or may not obtain in actual societies. I present two analogous folk-theorem arguments in response to the Invisible Foole’s challenge, one using the idea that the Invisible Foole’s power of concealment might be transitory, and the other using the idea that members of society might stop performing in covenants with anyone, thus punishing the Invisible Foole indirectly, if the Invisible Foole commits sufficiently many injustices.
In this article, I analyze the circumstances of justice, that is, the background conditions that are necessary and sufficient for justice to exist between individual parties in society. Contemporary political philosophers almost unanimously accept an account of these circumstances attributed to David Hume. I argue that the conditions of this standard account are neither sufficient nor necessary conditions for justice. In particular, I contend that both a Hobbesian state of nature and a Prisoners’ Dilemma are cases in which the conditions of the standard account obtain and yet no justice exists between parties. I propose an alternative set of generic circumstances of justice motivated by examples from game theory. Parties are in these generic circumstances with respect to each other when: (1) they are engaged in a conflictual coordination game with multiple strict Nash equilibrium points where, at any of these equilibria, some parties do not receive their greatest payoffs, and (2) they have common knowledge that each party is rational and follows her end of a strict equilibrium where no party receives her greatest payoff. These two conditions reflect the idea that justice requires all parties to make some sacrifices so that others can have more of the goods they need and want. I argue that these generic circumstances are necessary and sufficient conditions for parties to follow generic norms of justice, that is, mutually beneficial practices that require some sacrifices. Key words: common knowledge; conflictual coordination; correlated equilibrium; moderate selfishness; moderate variable scarcity; rough equality.
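Condition (1) of the generic circumstances can be illustrated with a simple two-party payoff matrix, assumed here for concreteness rather than drawn from the article.

```latex
% Illustrative conflictual coordination game (not the article's own example):
% two strict Nash equilibria, (A_1, B_1) and (A_2, B_2); at each, one party
% receives less than her greatest feasible payoff.
\[
\begin{array}{c|cc}
        & B_1      & B_2 \\ \hline
  A_1   & (3,\,1)  & (0,\,0) \\
  A_2   & (0,\,0)  & (1,\,3)
\end{array}
\]
```

Each strict equilibrium requires one party to accept her second-best payoff, which is the sense in which following a norm of justice in such circumstances demands a sacrifice.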
In recent years, a number of authors have used game-theoretic reasoning to explain why purely self-interested agents would ever conform their economic activities with the requirements of justice, when by doing so they forego opportunities to reap unilateral net gains by exploiting others. In this paper, I argue that Hume's justification of honest economic exchanges between self-interested agents in the Treatise foreshadows this contemporary literature. Hume analyzes the problem of explaining justice in self-interested economic exchange as a problem of agents coordinating on a pattern of reciprocal cooperation, as opposed to some other behavioral pattern such as reciprocal exploitation, in exchanges repeated over time. Hume's arguments anticipate informally the contemporary interpretation of just economic practices as forming part of an equilibrium of a repeated game. I close the paper by arguing that Hume does not provide a satisfactory explanation of how the mutual expectations that support justice in economic exchange arise in a community of self-interested agents. The problem Hume leaves unsolved is one of equilibrium selection, that is: Why do agents follow an equilibrium corresponding to just economic exchanges rather than some other equilibrium corresponding to unjust exchanges? I also argue that contemporary game theory still lacks a satisfactory theory of equilibrium selection, but that such a theory would lead us closer to a satisfactory Humean reconciliation of justice and self-interest in economic exchange.
I resolve a previously unnoticed anomaly in the analysis of collective action problems. Some political theorists apply game theory to analyze the paradox of anarchy: War is apparently inevitable in anarchy even though all warring parties prefer peace over war. Others apply tipping threshold analysis to resolve the paradox of revolution: Joining a revolution is apparently always irrational even when an overwhelming majority of the population wish to replace their regime. The usual game-theoretic analysis of anarchy yields the conclusion that the suboptimal equilibrium of war is inevitable. The usual tipping threshold analysis of revolution yields the conclusion that the optimal equilibrium of successful revolution is possible. Yet structurally the collective action problems of anarchy and potential revolution are much the same. This suggests that tipping threshold analysis and game theory are incompatible methodologies, despite their widespread use in the social sciences. I argue that there is no real tension between game theory and tipping threshold analysis, even though these methodologies have developed largely independently of each other. I propose a Variable Belief Threshold model of collective action that combines elements of game theory and tipping threshold analysis. I show by example that one can use this kind of hybrid model to give compatible explanations of conflict in anarchy and successful revolution.
Contents: Introduction; Two Classic Problems, and Two Popular Analyses (2.1 The paradox of anarchy; 2.2 The paradox of revolution); Restating the Puzzle; Evaluating the Prisoners’ Dilemma and S-Curve Models; The Variable Belief Threshold Model (Example 5.1 A population of moderates with independent deviations; Example 5.2 A heterogeneous population with independent deviations; Example 5.3 A heterogeneous population with coordinated deviations); Conclusion; Appendix: Computer Simulations.
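For the tipping-threshold side of the comparison, the following is a generic Granovetter-style cascade sketch rather than the Variable Belief Threshold model itself; the threshold distribution is an assumption chosen to show how a few unconditional participants can, or can fail to, tip the population into successful revolution.

```python
# Generic S-curve tipping sketch (not the paper's model): each citizen joins
# the revolt once the share already participating reaches her personal
# threshold; iterating the empirical threshold distribution finds the tipping
# fixed point. The threshold distribution below is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
# A few unconditional rebels (threshold 0); everyone else needs to see some
# participation first.
thresholds = np.concatenate([np.zeros(5), rng.uniform(0.0, 0.9, size=95)])

def cascade(thresholds, steps=100):
    share = 0.0
    for _ in range(steps):
        new_share = np.mean(thresholds <= share)   # all at or below the share join
        if new_share == share:
            break
        share = new_share
    return share

print(f"equilibrium participation: {cascade(thresholds):.2f}")
# With this assumed distribution the cascade typically runs up to full
# participation; shift the thresholds upward and it can stall near zero, which
# is the contrast between the two equilibria the abstract discusses.
```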
Learning to take turns in repeated game situations is a robust phenomenon both in laboratory experiments and in everyday life. Nevertheless, it has received little attention in recent studies of learning dynamics in games. We investigate the simplest and most obvious extension of fictitious play to a learning rule that can recognize patterns, and show how players using this rule can spontaneously learn to take turns.
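The following is a minimal sketch of one such extension, under modelling assumptions of my own (the paper's exact rule and payoffs may differ): each player keeps fictitious-play counts of the opponent's action conditional on the previous round's joint outcome and best responds to the conditional frequencies, in a Battle-of-the-Sexes style game where alternation is the fair outcome.

```python
# Fictitious play with one-step pattern recognition (an illustrative variant,
# not necessarily the paper's rule): beliefs about the opponent are kept
# separately for each possible previous joint outcome.

import random

# Row payoffs and column payoffs for a Battle-of-the-Sexes style game.
PAYOFF = {('A', 'A'): (2, 1), ('B', 'B'): (1, 2),
          ('A', 'B'): (0, 0), ('B', 'A'): (0, 0)}
ACTIONS = ('A', 'B')

def best_response(i, counts):
    """Best reply to the empirical conditional distribution of the opponent."""
    total = sum(counts.values())
    def expected(a):
        return sum(counts[o] / total *
                   (PAYOFF[(a, o)][0] if i == 0 else PAYOFF[(o, a)][1])
                   for o in ACTIONS)
    return max(ACTIONS, key=lambda a: (expected(a), random.random()))

def play(rounds=500, seed=3):
    random.seed(seed)
    states = [None] + [(a, b) for a in ACTIONS for b in ACTIONS]
    # Small random prior counts for each player, conditional on each state.
    counts = [{s: {a: random.uniform(0.1, 1.0) for a in ACTIONS} for s in states}
              for _ in (0, 1)]
    state, history = None, []
    for _ in range(rounds):
        profile = (best_response(0, counts[0][state]),
                   best_response(1, counts[1][state]))
        counts[0][state][profile[1]] += 1     # row updates beliefs about column
        counts[1][state][profile[0]] += 1     # column updates beliefs about row
        history.append(profile)
        state = profile
    return history

print(play()[-10:])
# Inspect the tail: depending on the initial weights, a run either locks onto
# one coordination equilibrium or settles into alternating between (A, A) and
# (B, B), i.e. the players learn to take turns.
```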
The traditional solution concept for noncooperative game theory is the Nash equilibrium, which contains an implicit assumption that players’ probability distributions satisfy probabilistic independence. However, in games with more than two players, relaxing this assumption results in a more general equilibrium concept based on joint beliefs. This article explores the implications of this joint-beliefs equilibrium concept for two kinds of conflictual coordination games: crisis bargaining and public goods provision. We find that, using updating consistent with Bayes’ rule, players’ beliefs converge to equilibria in joint beliefs which do not satisfy probabilistic independence. In addition, joint beliefs greatly expand the set of mixed equilibria. On the face of it, allowing for joint beliefs might be expected to increase the prospects for coordination. However, we show that if players use joint beliefs, which may be more likely as the number of players increases, then the prospects for coordination in these games decline vis-à-vis independent beliefs.
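The relaxation can be stated compactly; the formulation below is a generic statement in the style of correlated equilibrium, not a quotation of the article's definitions.

```latex
% Generic statement of an equilibrium in joint beliefs (illustrative, not
% quoted from the article): player i's beliefs about the others form a joint
% distribution \mu_i over A_{-i} that need not factor into a product of
% independent marginals.
\[
  a_i \in \arg\max_{a_i' \in A_i} \sum_{a_{-i} \in A_{-i}}
      \mu_i(a_{-i})\, u_i(a_i', a_{-i})
  \quad \text{for every player } i,
\]
with each $\mu_i$ correct about the others' play in equilibrium. The Nash
equilibrium is the special case in which
$\mu_i(a_{-i}) = \prod_{j \neq i} \sigma_j(a_j)$ for independent mixed
strategies $\sigma_j$; dropping that independence requirement yields the
larger set of equilibria in joint beliefs discussed above.
```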
In this dissertation, I develop a theory of rational inductive deliberation in the context of strategic interaction that generalizes previous theories of inductive deliberation. In this account of inductive deliberation, I model rational deliberators as players engaged in noncooperative games, such that: (1) they are Bayesian rational, in the sense that every deliberator chooses actions that maximize expected utility given the beliefs this deliberator has regarding the other deliberators, and (2) they update their beliefs about one another recursively, using rules of inductive logic. Inductive deliberators update their beliefs until they reach an equilibrium of the game, at which every deliberator maximizes expected utility given the deliberator's beliefs over the actions of the other deliberators. The theory of inductive deliberation generalizes previous theories by allowing for the possibility of correlation in the beliefs of the deliberators. Most of the results of noncooperative game theory presuppose that the players' strategies are probabilistically independent. I argue that such probabilistic independence assumptions are unfounded, and that agents should take into account the possibility that their opponents' actions are correlated. Relaxing the probabilistic independence assumption in noncooperative game theory leads to various correlated equilibrium concepts, which I argue are the appropriate solution concepts for noncooperative games. Relaxing the independence assumption in the inductive dynamics enables correlation in the deliberators' beliefs to emerge spontaneously, resulting in the deliberators converging to correlated equilibrium. I devote the majority of the dissertation to the formal theory of inductive deliberation and correlated equilibrium. In particular, I show the following: (1) under suitable conditions, correlated equilibria correspond to fixed points of the dynamics, and (2) the dynamics can create correlation in beliefs from an initial uncorrelated state. In the final chapter of the dissertation, I give an account of the origins of social conventions, such as the use of particular words in human languages, as the result of inductive deliberation.