Evolutionary applications of game theory present one of the most pedagogically accessible varieties of genuine, contemporary theoretical biology. We present here Oyun (OY-oon, http://charlespence.net/oyun), a program designed to run iterated prisoner’s dilemma tournaments, competitions between prisoner’s dilemma strategies developed by the students themselves. Using this software, students are able to readily design and tweak their own strategies, and to see how they fare both in round-robin tournaments and in “evolutionary” tournaments, where the scores in a given “generation” directly determine contribution to the population in the next generation. Oyun is freely available, runs on Windows, Mac, and Linux computers, and the process of creating new prisoner’s dilemma strategies is both easy to teach and easy for students to grasp. We illustrate with two interesting examples taken from actual use of Oyun in the classroom.
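A round-robin tournament of the kind described can be sketched in a few lines. This is a minimal illustration, not Oyun's actual implementation: the Axelrod-style payoff values and the two strategies shown are assumptions chosen for the example.

```python
# Axelrod-style payoffs (illustrative): (my move, their move) -> my score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's last move.
    return their_history[-1] if their_history else 'C'

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated match and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def round_robin(strategies, rounds=200):
    """Total each strategy's score over all pairings."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = play_match(strategies[a], strategies[b], rounds)
            totals[a] += sa
            totals[b] += sb
    return totals

totals = round_robin({'AllD': always_defect, 'TFT': tit_for_tat})
```

An "evolutionary" tournament of the kind the abstract mentions would then use each strategy's round-robin total to weight its share of the next generation's population.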
According to the so-called “Folk Theorem” for repeated games, stable cooperative relations can be sustained in a Prisoner’s Dilemma if the game is repeated an indefinite number of times. This result depends on the possibility of applying strategies that are based on reciprocity, i.e., strategies that reward cooperation with subsequent cooperation and punish defection with subsequent defection. If future interactions are sufficiently important, i.e., if the discount rate is relatively small, each agent may be motivated to cooperate by fear of retaliation in the future. For finite games, however, where the number of plays is known beforehand, there is a backward induction argument showing that rational agents will not be able to achieve cooperation. On behalf of the Hobbesian “Foole”, who cannot see any advantage in cooperation, Gregory Kavka (1983, 1986) has presented an argument that significantly extends the range of the backward induction argument. He shows that, for the backward induction argument to be effective, it is not necessary that the precise number of future interactions be known. It is sufficient that there is a known definite upper bound on the number of interactions. A similar argument is developed by John W. Carroll (1987). We will here question the assumption of a known upper bound. When the assumption is made precise in the way needed for the argument to go through, its apparent plausibility evaporates. We then offer a reformulation of the argument, based on weaker, and more plausible, assumptions.
For the tradition, an action is rational if maximizing; for Gauthier, if expressive of a disposition it maximized to adopt; for me, if maximizing on rational preferences, ones whose possession maximizes given one's prior preferences. Decision and Game Theory and their recommendations for choice need revamping to reflect this new standard for the rationality of preferences and choices. It would not be rational when facing a Prisoner's Dilemma to adopt or co-operate from Amartya Sen's "Assurance Game" or "Other-Regarding" preferences. But there are preferences which it maximizes to adopt and co-operate from.
Teaching economics has been shown to encourage students to defect in a prisoner's dilemma game. However, can ethics training reverse that effect and promote cooperation? We conducted an experiment to answer this question. We found that students who had the ethics module had higher rates of cooperation than students without the ethics module, even after controlling for communication and other factors expected to affect cooperation. We conclude that the teaching of ethics can mitigate the possible adverse incentives of the prisoner's dilemma, and, by implication, the adverse effects of economics and business training.
The so-called "Prisoner's Dilemma" is often referred to in business ethics, but probably not well understood. This article has three parts: (1) I claim that models derived from game theory are significant in the field for discussions of prudential ethics and the practical decisions managers make; (2) I discuss using them as a practical pedagogical exercise and some of the lessons generated; (3) more speculatively, I suggest that they are useful in discussions of corporate personhood.
Rachlin basically marshals three reasons behind his unconventional claim that altruism is a subcategory of self-control and that, hence, the prisoner's dilemma is the appropriate metaphor of altruism. I do not find any of the three reasons convincing. Therefore, the prisoner's dilemma metaphor is unsuitable for explaining altruism.
I first argue against Peter Singer's exciting thesis that the Prisoner's Dilemma explains why there could be an evolutionary advantage in making reciprocal exchanges that are ultimately motivated by genuine altruism over making such exchanges on the basis of enlightened long-term self-interest. I then show that an alternative to Singer's thesis (one also meant to corroborate the view, recently defended by Gregory Kavka, that natural selection favors genuine altruism) fails as well. Finally, I show that even granting Singer's and Kavka's claim about the selective advantage of altruism proper, it is doubtful whether that type of claim can be used in a particular sort of sociobiological argument against psychological egoism.
Many recent studies of norm emergence employ the "prisoner's dilemma" (PD) paradigm, which focuses on the free-rider problem that can block the cooperation required for the emergence of social norms. This paper proposes an expansion of the PD paradigm to include a closely related game termed the "altruist's dilemma" (AD). Whereas egoistic behavior in the PD leads to collectively irrational outcomes, the opposite is the case in the AD: altruistic behavior (e.g., following the Golden Rule) leads to collectively irrational outcomes, whereas egoistic behavior leads to Pareto-optimal outcomes. The analysis shows that PDs can be converted into ADs either by increasing cooperation costs or by diminishing marginal gains from cooperation; therefore ADs are as empirically abundant as PDs. In addition, the analysis shows that altruists are not the only type of actors who fall prey to the AD; egoists can fall into this trap as well if they possess a capacity for interpersonal control. Where group solidarity is defined analytically in terms of the extent of cooperation in both PDs and ADs, this paper presents a model based on rational choice to account for variations in solidarity. According to the proposed analysis, levels of group solidarity depend on the balance in the group between compliant control, which increases cooperation, and oppositional control, which reduces it. That balance, in turn, depends on the allocation of power within the group.
The "Prisoner's Dilemma" game has been extensively discussed in both the public and academic press. Thousands of articles and many books have been written about this disturbing game and its apparent representation of many problems of society. The origin of the game is attributed to Merrill Flood and Melvin Dresher. I quote from the Stanford Encyclopedia of Philosophy: Puzzles with this structure were devised and discussed by Merrill Flood and Melvin Dresher in 1950, as part of the Rand Corporation's investigations into game theory (which Rand pursued because of possible applications to global nuclear strategy). The title "prisoner's dilemma" and the version with prison sentences as payoffs are due to Albert Tucker, who wanted to make Flood and Dresher's ideas more accessible to an audience of Stanford psychologists. The Prisoner's Dilemma is a short parable about two prisoners who are individually offered a chance to rat on each other, for which the "ratter" would receive a lighter sentence and the "rattee" would receive a harsher sentence. The problem results from the fact that both can play this game -- that is, defect -- and if both do, then both do worse than they would had they both kept silent. This peculiar parable serves as a model of cooperation between two or more individuals (or corporations or countries) in ordinary life, in that in many cases each individual would be personally better off not cooperating with (that is, defecting on) the other.
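The logic of the parable can be made explicit with a tiny payoff table. The sentence lengths below are illustrative assumptions (lower is better for the prisoner); the point is structural: defecting strictly dominates for each prisoner, yet mutual defection leaves both worse off than mutual silence.

```python
# (my move, other's move) -> my sentence in years (illustrative values)
SENTENCE = {('silent', 'silent'): 1,
            ('silent', 'rat'): 10,
            ('rat', 'silent'): 0,
            ('rat', 'rat'): 5}

# Whatever the other prisoner does, ratting yields a shorter sentence...
for other in ('silent', 'rat'):
    assert SENTENCE[('rat', other)] < SENTENCE[('silent', other)]

# ...yet if both follow that dominant reasoning, both do worse than
# had both kept silent.
assert SENTENCE[('rat', 'rat')] > SENTENCE[('silent', 'silent')]
```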
The iterated prisoner’s dilemma (IPD) has been widely used in the biological and social sciences to model dyadic cooperation. While most of this work has focused on the discrete prisoner’s dilemma, in which actors choose between cooperation and defection, there has been some analysis of the continuous IPD, in which actors can choose any level of cooperation from zero to one. Here, we analyse a model of the continuous IPD with a limited strategy set, and show that a generous strategy achieves the maximum possible payoff against its own type. While this strategy is stable in a neighborhood of the equilibrium point, the equilibrium point itself is always vulnerable to invasion by uncooperative strategies, and hence subject to eventual destabilization. The presence of noise or errors has no effect on this result. Instead, generosity is favored because of its role in increasing contributions to the most efficient level, rather than in counteracting the corrosiveness of noise. Computer simulations using a single-locus infinite-alleles Gaussian mutation model suggest that outcomes ranging from a stable cooperative polymorphism to complete collapse of cooperation are possible depending on the magnitude of the mutational variance. Also, making the cost of helping a convex function of the amount of help provided makes it more difficult for cooperative strategies to invade a non-cooperative equilibrium, and for the cooperative equilibrium to resist destabilization by noncooperative strategies.
The Prisoner's Dilemma (PD) exhibits a tragedy in this sense: if the players are fully informed and rational, they are condemned to a jointly dispreferred outcome. In this essay I address the following question: What feature of the PD's payoff structure is necessary and sufficient to produce the tragedy? In answering it I use the notion of a trembling-hand equilibrium. In the final section I discuss an implication of my argument, an implication which bears on the persistence of the problem posed by the PD.
A version of this paper was presented at the IEEE International Conference on Computational Intelligence, combined meeting of ICNN, FUZZ-IEEE, and ICEC, Orlando, June-July, 1994, and an earlier form of the result is to appear as "The Undecidability of the Spatialized Prisoner's Dilemma" in Theory and Decision. An interactive form of the paper, in which figures are called up as evolving arrays of cellular automata, is available on DOS disk as Research Report #94-04i. An expanded version appears as chapter 6 of The Philosophical Computer.
The paper is essentially a short version of Spohn's "Strategic Rationality", which emphasizes in particular how the ideas developed there may be used to shed new light on the iterated prisoner's dilemma (and on the iterated Newcomb's problem).
Hamilton's game-theoretic conflict model, which applies Maynard Smith's concept of an evolutionarily stable strategy to the Prisoner's Dilemma, gives rise to an inconsistency between theoretical prescription and empirical results. Proposed resolutions of this problem are incongruent with the tenets of the models involved. The independent consistency of each model is restored, and the anomaly thereby circumvented, by a proof that no evolutionarily stable strategy exists in the Prisoner's Dilemma.
We present a new paradigm extending the Iterated Prisoner's Dilemma to multiple players. Our model is unique in granting players information about past interactions between all pairs of players, allowing for much more sophisticated social behaviour. We provide an overview of preliminary results and discuss the implications in terms of the evolutionary dynamics of strategies.
Experiments in which subjects play simultaneously several finite two-person prisoner's dilemma supergames with and without an outside option reveal that: (i) an attractive outside option enhances cooperation in the prisoner's dilemma game, (ii) if the payoff for mutual defection is negative, subjects' tendency to avoid losses leads them to cooperate, while this tendency makes them stick to mutual defection if its payoff is positive, (iii) subjects use probabilistic start- and end-effect behavior.
In the spatialized Prisoner's Dilemma, players compete against their immediate neighbors and adopt a neighbor's strategy should it prove locally superior. Fields of strategies evolve in the manner of cellular automata (Nowak and May, 1993; Mar and St. Denis, 1993a,b; Grim 1995, 1996). Often a question arises as to what the eventual outcome of an initial spatial configuration of strategies will be: Will a single strategy prove triumphant in the sense of progressively conquering more and more territory without opposition, or will an equilibrium of some small number of strategies emerge? Here it is shown, for finite configurations of Prisoner's Dilemma strategies embedded in a given infinite background, that such questions are formally undecidable: there is no algorithm or effective procedure which, given a specification of a finite configuration, will in all cases tell us whether that configuration will or will not result in progressive conquest by a single strategy when embedded in the given field. The proof introduces undecidability into decision theory in three steps: by (1) outlining a class of abstract machines with familiar undecidability results, by (2) modelling these machines within a particular family of cellular automata, carrying over undecidability results for these, and finally by (3) showing that spatial configurations of Prisoner's Dilemma strategies will take the form of such cellular automata.
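The imitate-the-best-neighbor dynamic these models share can be sketched directly. This is a toy illustration under stated assumptions, not the construction used in the undecidability proof: the grid size, the one-shot payoffs, and the tie-breaking rule (keep your own strategy unless a neighbor does strictly better) are all choices made for the example.

```python
# One-shot PD payoffs (illustrative): (my move, their move) -> my score
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
SIZE = 5  # small toroidal grid of unconditional cooperators/defectors

def neighbours(r, c):
    # The eight surrounding cells, wrapping at the edges.
    return [((r + dr) % SIZE, (c + dc) % SIZE)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

def step(grid):
    # Each cell's score is the sum of its payoffs against its neighbours.
    scores = {(r, c): sum(PAYOFF[(grid[r][c], grid[nr][nc])]
                          for nr, nc in neighbours(r, c))
              for r in range(SIZE) for c in range(SIZE)}
    # Each cell then copies the strategy of its best-scoring neighbour,
    # keeping its own strategy unless a neighbour does strictly better.
    new = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            best = (r, c)
            for n in neighbours(r, c):
                if scores[n] > scores[best]:
                    best = n
            new[r][c] = grid[best[0]][best[1]]
    return new

# A lone defector in a field of cooperators spreads to its neighbours:
grid = [['C'] * SIZE for _ in range(SIZE)]
grid[2][2] = 'D'
grid = step(grid)
```

With these payoffs, the lone defector scores 40 against its eight cooperating neighbours, so after one step the whole 3x3 block around it has turned to defection; whether such spread ever halts is exactly the kind of question the paper shows to be undecidable in general.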
Revisiting Lacan's discussion of the puzzle of the prisoner's dilemma provides a means of elaborating a theory of the trans-subjective. An illustration of this dilemma provides the basis for two important arguments. Firstly, that we need to grasp a logical succession of modes of subjectivity: from subjectivity to inter-subjectivity, and from inter-subjectivity to a form of trans-subjective social logic. The trans-subjective, thus conceptualized, enables forms of social objectivity that transcend the level of (inter)subjectivity, and which play a crucial role in consolidating given societal groupings. The paper advances, secondly, that various declarative and symbolic activities are important non-psychological bases—trans-subjective foundations—for psychological identifications of an inter-subjective sort. These assertions link interestingly to recent developments in the contemporary social psychology of interobjectivity, which likewise emphasize a type of objectivity that plays an indispensable part in co-ordinating human relations and understanding.
The Prisoner’s Dilemma is a popular device used by researchers to analyze such institutions as business and the modern corporation. This popularity is not deserved under a certain condition that is widespread in college education. If we, as management educators, take seriously our parts in preparing our students to participate in the institutions of a democratic society, then the Prisoner’s Dilemma, as clever a rhetorical device as it is, is an unacceptable means to that end. By posing certain questions about the prisoners in the Prisoner’s Dilemma, I show that management educators have created a Prisoner’s Dilemma of their own, whereby they intellectually imprison themselves and their students by continuing to appeal to the Prisoner’s Dilemma. These questions are not encouraged by the advocates of the Prisoner’s Dilemma.
The results of a series of computer simulations demonstrate how the introduction of separate spatial dimensions for agent interaction and learning respectively affects the possibility of cooperation evolving in the repeated prisoner's dilemma played by populations of boundedly-rational agents. In particular, the localisation of learning promotes the emergence of cooperative behaviour, while the localisation of interaction has an ambiguous effect on it.
I distinguish and review six major attempts to give a Co-operative solution to the Prisoner's Dilemma: Symmetry, Mechanism, Inducement, Resolution, Alternative Principle, and Preference-Revision. I then detail and criticize those of Ned McClennen (Resolution/possibly Preference-Revision) and David Gauthier (Alternative Principle). I conclude with some observations about what the failure of their solutions shows must be the parameters of any correct Co-operative solution: Rational agents should adopt maximizing dispositions, i.e., ones which will induce them to Co-operate with just those similarly disposed, but adopting the dispositions must consist in adopting revised preferences, ones favoring Co-operating under certain conditions, and able to rationalize it as straightforwardly maximizing on the new preferences. Co-operation would then be rationalized by the new preferences. Still, we will not know exactly which preference-function PD agents should adopt, only that it must maximize to adopt it, and maximize to Co-operate from it with similar agents. The details are complicated, and must await further study.
The purpose of this research is to examine the impact of individual and firm moral philosophies on marketing exchange relationships. Personal moral philosophies range from the extreme forms of true altruists and true egoists, along with three hybrids that represent middle ground (i.e., realistic altruists, tit-for-tats, and realistic egoists). Organizational postures are defined as Ethical Paradigm, Unethical Paradigm, and Neutral Paradigm, which result in changes to personal moral philosophies and company and industry performance. The study context is a simulation of an exchange environment using a variation of the prisoners' dilemma game. A literature review is provided in the opening section, followed by details on the simulation, discussion of the results, and the implications for theory and practice.
Gauthier's argument for constrained maximization, presented in Morals by Agreement, is perfected by taking into account the possibility of accidental exploitation and discussing the limitations on the values of the parameters which measure the translucency of the actors. Gauthier's argument is nevertheless shown to be defective concerning the rationality of constrained maximization as a strategic choice. It can be argued that it applies only to a single actor entering a population of individuals who are themselves not rational actors but simple rule-followers. A proper analysis of the strategic choice situation involving two rational actors who confront each other shows that constrained maximization as the choice of both actors can only result under very demanding assumptions.
Mankind soon learn to make interested uses of every right and power which they possess, or may assume. The public money and public liberty...will soon be discovered to be sources of wealth and dominion to those who hold them; distinguished, too, by this tempting circumstance, that they are the instrument, as well as the object of acquisition. With money we will get men, said Caesar, and with men we will get money. Nor should our assembly be deluded by the integrity of their own purposes, and conclude that these unlimited powers will never be abused, because themselves are not disposed to abuse them. They should look forward to a time, and that not a distant one, when a corruption in this, as in the country from which we derive our origin, will have seized the heads of government, and be spread by them through the body of the people; when they will purchase the voices of the people, and make them pay the price.
To show it is sometimes rational to cooperate in the Prisoner's Dilemma, David Gauthier has claimed that if it is rational to form an intention then it is sometimes rational to act on it. However, the Paradox of Deterrence and the Toxin Puzzle seem to put this general type of claim into doubt. For even if it is rational to form a deterrent intention, it is not rational to act on it (if it is not successful); and even if it is rational to form an intention to drink a toxin, it is not rational to act on it (come the time for drinking). This article employs an extended version of Michael Bratman's theory of intention to show how to argue systematically that it can be rational to act on rationally formed cooperative intentions, while not being committed to the rationality of apocalyptic retaliation, or pointless toxin drinking.
The Prisoner’s Dilemma (PD) is widely used to model social interaction between unrelated individuals in the study of the evolution of cooperative behaviour in humans and other species. Many effective mechanisms and promotive scenarios have been studied which allow for small founding groups of cooperative individuals to prevail even when all social interaction is characterised as a PD. Here, a brief critical discussion of the role of the PD as the most prominent tool in cooperation research is presented, followed by two new objections to such an exclusive focus on PD-based models of social interaction. It is highlighted that only 2 of the 726 combinatorially possible strategically unique ordinal 2x2 games have the detrimental characteristics of a PD and that the frequency of PD-type games in a space of games with random payoffs does not exceed about 3.5%. Although these purely mathematical considerations do not compellingly imply that the relevance of PDs is overestimated, it is proposed that, in the absence of convergent empirical information about the ancestral human social niche, this finding can be interpreted in favour of a so far rather neglected answer to the question of how the founding groups of human cooperation themselves came to cooperate: Behavioural and/or psychological mechanisms which evolved for other, possibly more frequent, social interaction situations might have been applied to PD-type dilemmas only later. Human cooperative behaviour might thus partly have begun as a cooptation.
The problems that I address concern the morality and rationality of decisions with respect to the application and practice of science. Formally, the situation is a standard decision theoretic one in which one has a set of alternatives and a set of outcomes. The standard solution is to maximize expected utility. This formal simplicity conceals considerable philosophical complexity. The most obvious is — whose expected utility should we maximize? The second is — are there any moral constraints on what utility assignments we shall allow? The principle of rationality I am assuming is that a rational decision should be based on the total information available. Failure to cooperate in effecting such an amalgamation is subversive with respect to this overriding principle of rationality. It is a fundamental principle of truth seeking. Given the prima facie moral obligation to seek truth, failure to cooperate is prima facie immoral as well.
This chapter argues that, under certain conditions, forming an intention makes an action rational which would otherwise not have been rational, since intentions (together with beliefs) in and of themselves provide deductive reasons for further intentions and actions, an argument which builds on previous work by R. M. Hare, Michael Bratman and others. It also provides an articulation and defense of the concept of "minimally constrained maximization" as a unified general solution to the well-known paradoxes of rationality, including the paradox of deterrence and the prisoner's dilemma.
David Gauthier has argued that, under certain conditions, cooperation in the Prisoner's Dilemma is rational. A crucial principle he employs in this argument, however, also implies that pointless retaliation after a failed threat could be rational. In this paper, I introduce one possible reformulation of the Cooperation Argument, by replacing its second premise with a principle connecting rationally adopted intentions, rational action, and rational reconsideration, and a specific theory of rational reconsideration. I then argue that this reformulated Cooperation Argument is not susceptible to any form of the Deterrence Objection, and conclude that the Deterrence Objection may be circumvented if proper attention is paid to the role of rational reconsideration.
Prisoner's dilemmas can lead rational people to interact in ways that lead to persistent inefficiencies. These dilemmas create a problem for institutional designers to solve: devise institutions that realign individual incentives to achieve collectively rational outcomes. I will argue that we do not always want to eliminate misalignments between individual incentives and efficient outcomes. Sometimes we want to preserve prisoner's dilemmas, even when we know that they will systematically lead to inefficiencies. No doubt, prisoner's dilemmas can create problems, but they also create opportunities to practice the cooperative norms that make market institutions possible in the first place. An ethical market culture, I argue, benefits from the presence of prisoner's dilemmas. I first consider standard approaches for solving prisoner's dilemmas. I then argue for the value of prisoner's dilemmas. Finally, I show the significance of this argument for advocating codes of business ethics.
Jan Österberg (Self and Others, 1988) argues that the most defensible form of egoism should not only tell each of us what to do but also tell us what we ought to do. He also claims that collective norms should take precedence over individual ones. An individual ought to do one's part in an action pattern that is prescribed for the group - provided that other members of the group do their part. This paper questions Österberg's claim that Collective Egoism, unlike other forms of egoism, avoids violations of the principles which he takes to be analytical adequacy criteria for ethical theories: the principles of "deontic consequence" and "joint satisfiability". Furthermore, it questions his argument that Collective Egoism yields the "right" prescriptions in its main test-case: the Prisoners' Dilemma. The improved version of Collective Egoism is able to deal with the two-person Prisoners' Dilemma, but it still misbehaves when we move to the many-persons cases. A certain type of "free rider" problem proves to be especially troublesome.
Duncan MacIntosh has argued that David Gauthier's notion of a constrained maximization disposition faces a dilemma. For if such a disposition is revocable, it is no longer rational come the time to act on it, and so acting on it is not (as Gauthier argues) rational; but if it is not revocable, acting on it is not voluntary. This paper is a response to MacIntosh's dilemma. I introduce an account of rational intention of a type which has become increasingly and independently prominent in the literature, and argue that, on this account, rational and voluntary constraint is possible.
As levels of trust decrease and the necessity for trust increases in our society, we are increasingly driven toward the untoward, even disastrous, outcomes of the prisoner's dilemma. Yet despite the growing evidence that (re)building conditions of trust is increasingly mandatory in our era, modern moral philosophy (by default) and the social sciences (implicitly) legitimize an instrumental rationality which is the root problem. The greatest danger is that as conditions of trust are rationalized away through the progressive institutionalization of an instrumental rationality, we are driven towards the most virulent form of the prisoner's paradox — ethical relativism and its nihilistic consequences.
This collection focuses on questions that arise when morality is considered from the perspective of recent work on rational choice and evolution. Linking questions like "Is it rational to be moral?" to the evolution of cooperation in the Prisoner's Dilemma, the book brings together new work using models from game theory, evolutionary biology, and cognitive science, as well as from philosophical analysis. Among the contributors are leading figures in these fields, including David Gauthier, Paul M. Churchland, Brian Skyrms, Ronald de Sousa, and Elliott Sober.
In their attempt to provide a reason to be moral, contractarians such as David Gauthier are concerned with situations allowing a group of agents the chance of mutual benefit, so long as at least some of them are prepared to constrain their maximising behaviour. But what justifies this constraint? Gauthier argues that it could be rational (because maximising) to intend to constrain one's behaviour, and in certain circumstances to act on this intention. The purpose of this paper is to examine the conditions under which it is rational to form, and to act on, intentions. I introduce and examine in detail what Gauthier has to say on these issues, argue that it suffers from various problems, and propose an alternative account which I claim avoids them.
This paper is an encyclopedia entry on the political philosophy of libertarianism, written for the Internet Encyclopedia of Philosophy. It discusses the major contemporary strands of libertarianism and their historical roots, and presents some of the main criticisms of these strands. Its focus is on libertarianism as a doctrine about distributive justice and political authority, and specifically on the consequentialist and natural rights formulations of these views.
This paper offers a novel ‘changing places’ account of identification in games, where the consequences of role swapping are crucial. First, it illustrates how such an account is consistent with the view, in classical game theory, that only outcomes (and not pathways) are significant. Second, it argues that this account is superior to the ‘pooled resources’ alternative when it comes to dealing with some situations in which many players identify. Third, it shows how such a ‘changing places’ account can be used in games where some of the players identify with one another, but others do not. Finally, it illustrates how the model can handle the notion that identification comes in degrees.
Decision theory explains weakness of will as the result of a conflict of incentives between different transient agents. In this framework, self-control can only be achieved by the I-now altering the incentives or choice-sets of future selves. There is no role for an extended agency over time. However, it is possible to extend game theory to allow multiple levels of agency. At the inter-personal level, theories of team reasoning allow teams to be agents, as well as individuals. I apply team reasoning at the intra-personal level, taking the self as a team of transient agents over time. This allows agents to ask, not just “What should I-now do?”, but also “What should I, the person over time, do?”, which may enable agents to achieve self-control. The resulting account is Aristotelian in flavour, as it involves reasoning schemata and perception, and it is compatible with some of the psychological findings about self-control.
The traditional form of the backward induction argument, which concludes that two initially rational agents would always defect, relies on the assumption that they believe they will be rational in later rounds. Philip Pettit and Robert Sugden have argued, however, that this assumption is unjustified. The purpose of this paper is to reconstruct the argument without using this assumption. The formulation offered concludes that two initially rational agents would decide to always defect, and relies only on the weaker assumption that they do not believe they will not be rational in later rounds. The argument employs the idea that decisions justify revocable presumptions about behaviour.
I have maintained that some but not all prisoners' dilemmas are side-by-side Newcomb problems. The present paper argues that, similarly, some but not all versions of Newcomb's Problem are prisoners' dilemmas in which Taking Two and Predicting Two make an equilibrium that is dispreferred by both the box-chooser and predictor to the outcome in which only one box is taken and this is predicted. I comment on what kinds of prisoner's dilemmas Newcomb's Problem can be, and on opportunities that results reached may open for kinds of cooperative reasoning in versions of Newcomb's Problem.
David Gauthier thinks agents facing a prisoner's dilemma ('PD') should find it rational to dispose themselves to co-operate with those inclined to reciprocate (i.e., to acquire a constrained maximizer--'CM'--disposition), and to co-operate with other CMers. Richmond Campbell argues that since dominance reasoning shows it remains to the agent's advantage to defect, his co-operation is only rational if CM "determines" him to co-operate, forcing him not to cheat. I argue that if CM "forces" the agent to co-operate, he is not acting at all, never mind rationally. Thus, neither author has shown that co-operation is rational action in a PD.
I argue that Gauthier's constrained-maximizer rationality is problematic. But standard Maximizing Rationality means one's preferences are only rational if it would not maximize on them to adopt new ones. In the Prisoner's Dilemma, it maximizes to adopt conditionally cooperative preferences. (These are detailed, with a view to avoiding problems of circularity of definition.) Morality then maximizes. I distinguish the roles played in rational choices and their bases by preferences, dispositions, moral and rational principles, the aim of rational action, and rational decision rules. I argue that Maximizing Rationality necessarily structures conclusive reasons for action. Thus conations of any sort can base rational choices only if the conations are structured like a coherent preference function; rational actions maximize on such functions. Maximization-constraining dispositions cannot integrate into a coherent preference function.
David Gauthier claims that it can be rational to co-operate in a prisoner's dilemma if one has adopted a disposition constraining one's self from maximizing one's individual expected utility, i.e., a constrained maximizer disposition. But I claim cooperation cannot be both voluntary and constrained. In resolving this tension I ask what constrained maximizer dispositions might be. One possibility is that they are rationally acquired, irrevocable psychological mechanisms which determine but do not rationalize cooperation. Another possibility is that they are rationally acquired preference-functions rationalizing cooperation as maximizing. I argue that if they are the first thing, then their adoption fails to make co-operation rational even if, as Gauthier also claims, actions are rational if they express rational dispositions. I then suggest that taking constrained maximizer dispositions to be things of the second sort would result in them being able to make co-operation rational, and that so-taking them therefore serves the bulk and spirit of Gauthier's larger claims, which I reconstruct accordingly.