In a recent, thought-provoking paper, Adam Elga (2010) argues against unsharp – e.g., indeterminate, fuzzy, and unreliable – probabilities. Rationality demands sharpness, he contends, and this means that decision theories like Levi's (1980, 1988), Gärdenfors and Sahlin's (1982), and Kyburg's (1983), though they employ different decision rules, face a common, and serious, problem. This article defends the rule to maximize minimum expected utility against Elga's objection.
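The rule the article defends can be sketched in a few lines. With an unsharp credal state, represented here as a set of probability distributions, the rule picks the option whose worst expected utility across the set is highest. All names and numbers below are illustrative assumptions, not material from the article.

```python
# A minimal sketch of the maximin-expected-utility rule, assuming an
# unsharp credal state modeled as a finite set of distributions.

def expected_utility(probs, utilities):
    # Standard expected utility under one distribution.
    return sum(p * u for p, u in zip(probs, utilities))

def maximin_expected_utility(options, credal_set):
    """options: dict mapping option name -> utilities per state.
    credal_set: list of probability distributions over the states."""
    def worst_case(utilities):
        return min(expected_utility(p, utilities) for p in credal_set)
    return max(options, key=lambda name: worst_case(options[name]))

# Two states; the probability of state 1 ranges over [0.2, 0.8].
credal_set = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
options = {
    "risky": (10, 0),  # excellent in state 1, nothing in state 2
    "safe": (4, 4),    # the same modest payoff either way
}
print(maximin_expected_utility(options, credal_set))  # -> safe
```

The risky option's worst expected utility over the set is 2, the safe option's is 4, so the rule selects the safe option even though some admissible distributions favor the risky one.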
Groups of people perform acts. For example, a committee passes a resolution, a team wins a game, and an orchestra performs a symphony. These collective acts may be evaluated for rationality. Take a committee’s passing a resolution. This act may be evaluated not only for fairness but also for rationality. Did it take account of all available information? Is the resolution consistent with the committee’s past resolutions? Standards of collective rationality apply to collective acts, that is, acts that groups of people perform. What makes a collective act evaluable for rationality? What methods of evaluation apply to collective acts? This paper addresses these two questions. Collective rationality is rationality’s extension from individuals to groups. The paper’s first few sections review key points about rationality. They identify the features of an individual’s act that make it evaluable for rationality and distinguish rationality’s methods of evaluating acts directly and indirectly controlled. This preliminary work yields general principles of rationality for all agents, both individuals and groups. Applying the general principles to groups answers the paper’s two main questions about collective rationality.
Does rational bargaining yield a social contract that is efficient and so inclusive? A core allocation, that is, an allocation that gives each coalition at least as much as it can get on its own, is efficient. However, some coalitional games lack a core allocation, so rationality does not require one in those games. Does rationality therefore permit exclusion from the social contract? I replace realization of a core allocation with another type of equilibrium achievable in every coalitional game. Fully rational agents coordinate the pursuit of incentives so that equilibria of this type are efficient. They adopt a social contract that is efficient and inclusive.
Agents face serious obstacles to making optimal decisions. For instance, their cognitive limits stand in the way. John Pollock’s book, Thinking about Acting, suggests many ways of revising decision principles to accommodate human limits and to direct limited, artificial agents. The book’s main proposal is to replace optimization, or expected-utility maximization, with locally global planning. This essay describes optimization and locally global planning, and then argues that optimization among salient options has the virtues of locally global planning without certain drawbacks. Although it does not endorse locally global planning, it recommends that decision theory incorporate some of the book’s ideas about settling for improvements when optimization among all options is unrealistic.
Standard principles of rational decision assume that an option's utility is both comprehensive and accessible. These features constrain interpretations of an option's utility. This essay presents a way of understanding utility and laws of utility. It explains the relation between an option's utility and its outcome's utility and argues that an option's utility is relative to a specification of the option. Utility's relativity explains how a decision problem's framing affects an option's utility and its rationality even for an agent who is cognitively perfect and lacks only empirical information. The essay rewrites standard laws of utility to accommodate relativization to propositions' specifications. The new laws are generalizations of the standard laws and yield them as special cases.
Abner Shimony (1988) argues that degrees of belief satisfy the axioms of probability because their epistemic goal is to match estimates of objective probabilities. Because the estimates obey the axioms of probability, degrees of belief must also obey them to reach their epistemic goal. This calibration argument faces some objections, but with a few revisions it can surmount them. It offers a good alternative to the Dutch book argument for compliance with the probability axioms. The defense of Shimony's calibration argument examines rational pursuit of an epistemic goal, introduces strength of evidence and its measurement, and distinguishes epistemic goals and functions.
Food products with genetically modified (GM) ingredients are common, yet many consumers are unaware of this. When polled, consumers say that they want to know whether their food contains GM ingredients, just as many want to know whether their food is natural or organic. Informing consumers is a major motivation for labeling. But labeling need not be mandatory. Consumers who want GM-free products will pay a premium to support voluntary labeling. Why do consumers want to know about GM ingredients? GM foods are tested to ensure safety and have been on the market for more than a decade. Still, many consumers, including some with food allergies, want to be cautious. Also, GM crops may affect neighboring plants through pollen drift. Despite tests for environmental impact, some consumers may worry that GM crops will adversely affect the environment. The study of risk and its management raises questions not settled by the life sciences alone. This book surveys various labeling policies and the cases for them. It is the first comprehensive, interdisciplinary treatment of the debate about labeling genetically modified food. The contributors include philosophers, bioethicists, food and agricultural scientists, attorneys/legal scholars, and economists.
How do rational agents coordinate in a single-stage, noncooperative game? Common knowledge of the payoff matrix and of each player's utility maximization among his strategies does not suffice. This paper argues that utility maximization among intentions and then acts generates coordination yielding a payoff-dominant Nash equilibrium.
A computer simulation runs a model generating a phenomenon under investigation. For the simulation to be explanatory, the model has to be explanatory. The model must be isomorphic to the natural system that realizes the phenomenon. This paper elaborates the method of assessing a simulation's explanatory power. Then it illustrates the method by applying it to two simulations in game theory. The first is Brian Skyrms's (1990) simulation of interactive deliberations. It is intended to explain the emergence of a Nash equilibrium in a noncooperative game. The second is Skyrms's (2004) simulation of the evolution of cooperation. It is intended to explain cooperation in assurance games. The final section suggests ways of enhancing the explanatory power of these simulations.
Sunstein argues that heuristics misguide moral judgments. Principles that are normally sound falter in unusual cases. In particular, heuristics generate erroneous judgments about regulation of risks. Sunstein's map of moral reasoning omits some prominent contours. The simple heuristics he suggests neglect a reasoner's attempt to balance the pros and cons of regulating a risk.
Within traditional decision theory, common decision principles, such as the principle to maximize utility, generally invoke idealization; they govern ideal agents in ideal circumstances. In Realistic Decision Theory, Paul Weirich adds practicality to decision theory by formulating principles that apply to nonideal agents in nonideal circumstances, such as real people coping with complex decisions. Bridging the gap between normative demands and psychological resources, Realistic Decision Theory is essential reading for theorists seeking precise normative decision principles that acknowledge the limits and difficulties of human decision-making.
Game theory's paradoxes stimulate the study of rationality. Sometimes they motivate the revising of standard principles of rationality. Other times they call for revising applications of those principles or introducing supplementary principles of rationality. I maintain that rationality adjusts its demands to circumstances, and in ideal games of coordination it yields a payoff-dominant equilibrium.
Rachlin favors following patterns over making decisions case by case. However, his accounts of self-control and altruism do not establish the rationality of making decisions according to patterns. The best arguments for using patterns as a standard of evaluation appeal to savings in cognitive costs and compensation for irrational dispositions. What the arguments show depends on how they are elaborated and refined.
To handle epistemic and pragmatic risks, Gärdenfors and Sahlin (1982, 1988) design a decision procedure for cases in which probabilities are indeterminate. Their procedure steps outside the traditional expected utility framework. Must it do this? Can the traditional framework handle risk? This paper argues that it can. The key is a comprehensive interpretation of an option's possible outcomes. Taking possible outcomes more broadly than Gärdenfors and Sahlin do, expected utility can give risk its due. In particular, Good's (1952) decision procedure adequately handles indeterminate probabilities and the risks they generate.
Classical bargaining theory attempts to solve a bargaining problem using only the information about the problem contained in the representation of its possible outcomes in utility space. However, this information usually underdetermines the solution. I use additional information about interpersonal comparisons of utility and bargaining power. The solution is then the outcome that maximizes the sum of power-weighted utilities. I use these results to advance a contractarian argument for a utilitarian form of social cooperation. As the original position, I propose a hypothetical situation in which the members of society are rational, fully informed, free, and equal. I argue that in this original position they would adopt a utilitarian form of social cooperation. I conclude that utilitarian cooperation constitutes a moral ideal toward which society ought to aspire.
I will characterize the utilitarian and maximin rules of social choice game-theoretically. That is, I will introduce games whose solutions are the utilitarian and maximin distributions respectively. Then I will compare the rules by exploring similarities and differences between these games. This method of comparison has been carried out by others. But I characterize the two rules using games that involve bargaining within power structures. This new characterization better highlights the ethical differences between the rules.
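The two social-choice rules being compared can be stated as simple maximizations over candidate distributions of utility. The sketch below, with invented candidate distributions, shows how the rules come apart; it illustrates only the rules themselves, not the paper's game-theoretic characterization of them.

```python
# Illustrative contrast between the utilitarian and maximin rules of
# social choice. Each candidate is a tuple of members' utilities.

def utilitarian(distributions):
    # Pick the distribution maximizing total utility.
    return max(distributions, key=lambda d: sum(d))

def maximin(distributions):
    # Pick the distribution maximizing the worst-off member's utility.
    return max(distributions, key=lambda d: min(d))

candidates = [(10, 1), (5, 4)]
print(utilitarian(candidates))  # -> (10, 1): total 11 beats total 9
print(maximin(candidates))      # -> (5, 4): worst-off gets 4, not 1
```

The divergence is the ethically interesting case: the utilitarian rule tolerates a badly-off member for the sake of the total, while maximin sacrifices total utility to raise the minimum.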
Causal decision theory produces decision instability in cases such as Death in Damascus where a decision itself provides evidence concerning the utility of options. Several authors have proposed ways of handling this instability. William Harper (1985 and 1986) advances one of the most elegant proposals. He recommends maximizing causal expected utility among the options that are causally ratifiable. Unfortunately, Harper's proposal imposes certain restrictions; for instance, the restriction that mixed strategies are freely available. To obtain a completely general method of handling decision instability, I step outside the confines of pure causal decision theory. I introduce a new kind of backtracking expected utility and propose maximizing it among the options that are causally ratifiable. In other words, I propose a hierarchical maximization of (1) conditional causal expected utility and (2) the new backtracking expected utility. I support this proposal with some intuitive considerations concerning the distinction between optimality and conditional optimality. And I prove that the proposal yields a solution in every finite decision problem.
When a trustee makes a decision for a client, a standard objective is to decide as the client would if he had the trustee's information. How can this objective be attained when, given the trustee's information, there is still uncertainty about the consequences of alternative courses of action? A promising approach is to apply the rule to maximize expected utility using the client's utilities for consequences and the trustee's probabilities for states. But taking utilities and probabilities from different sources causes a problem that has to be resolved. Briefly, the problem is that the client's utilities for consequences involve assessments of risks that are uninformed because he does not have informed probabilities. And the resolution of the problem is to reconstruct his utilities for consequences using a component due to risk that the trustee supplies for the client, and a component due to other consequences that the client supplies for himself.
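The "promising approach" described above, before the paper's correction to it, is just expected-utility maximization with the probabilities and utilities drawn from different sources. A minimal sketch, with invented states, options, and numbers:

```python
# Sketch of trustee decision-making: expected utility computed with
# the CLIENT's utilities for consequences and the TRUSTEE's
# probabilities for states. The scenario and numbers are illustrative
# assumptions, not from the paper.

def expected_utility(trustee_probs, client_utils):
    return sum(p * u for p, u in zip(trustee_probs, client_utils))

def trustee_choice(options, trustee_probs):
    """options: dict mapping option name -> client's utilities per state."""
    return max(options,
               key=lambda o: expected_utility(trustee_probs, options[o]))

trustee_probs = (0.9, 0.1)  # the trustee's informed probabilities
options = {
    "act": (10, -20),  # client's utilities in each state
    "wait": (2, 2),
}
print(trustee_choice(options, trustee_probs))  # -> act
```

The paper's point is that this combination is not yet adequate, because the client's utilities embed uninformed risk assessments; the sketch shows only the uncorrected rule that the correction then repairs.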
Some decision theorists criticize expected utility decision analysis and propose mean-risk decision analysis as a replacement. They claim that expected utility decision analysis neglects attitudes toward risk whereas mean-risk decision analysis accords these attitudes their proper status. However, mean-risk decision analysis and expected utility decision analysis are not incompatible, and it is advantageous for decision theory to develop each in a way that complements the other. Here I present a mean-risk rule that governs preferences among options and options given states. This mean-risk rule complements an expected utility rule that takes the utility of an option-state pair as the utility of the option given the state. I argue for the mean-risk rule using principles concerning basic intrinsic desires. The rule is comparative, but the last section offers some suggestions for its quantitative development.
In a decision problem with a dynamic setting there is at least one option whose realization would change the expected utilities of options by changing the probability or utility function with respect to which the expected utilities of options are computed. A familiar example is Newcomb's problem. William Harper proposes a generalization of causal decision theory intended to cover all decision problems with dynamic settings, not just Newcomb's problem. His generalization uses Richard Jeffrey's ideas on ratifiability, and material from game theory on mixed strategies. Harper's proposal has two drawbacks, however. One concerns the mechanism for choosing among ratifiable options. The other concerns the proposal's reliance upon mixed strategies. Here I make another proposal that eliminates these two drawbacks.
The rule to maximize expected utility is intended for decisions where options involve risk. In those decisions the decision maker's attitude toward risk is important, and the rule ought to take it into account. Allais's and Ellsberg's paradoxes, however, suggest that the rule ignores attitudes toward risk. This suggestion is supported by recent psychological studies of decisions. These studies present a great variety of cases where apparently rational people violate the rule because of aversion or attraction to risk. Here I attempt to resolve the issue concerning expected utility and risk. I distinguish two versions of the rule to maximize expected utility. One adopts a broad interpretation of the consequences of an option and has great intuitive appeal. The other adopts a narrow interpretation of the consequences of an option and seems to have certain technical and practical advantages. I contend that the version of the rule that interprets consequences narrowly does indeed neglect attitudes toward risk. That version of the rule excludes the risk involved in an option from the consequences of the option and, contrary to what is usually claimed, cannot make up for this exclusion through adjustments in probability and utility assignments. I construct a new, general argument that establishes this in a rigorous way. On the other hand, I contend that the version of the rule that interprets consequences broadly takes account of attitudes toward risk by counting the risk involved in an option among the consequences of the option. I rebut some objections to this version of the rule, in particular, the objection that the rule lacks practical interest. Drawing upon the literature on 'mean-risk' decision rules, I show that this version of the rule can be used to solve some realistic decision problems.
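The 'mean-risk' rules the abstract draws on evaluate an option by its mean payoff minus a penalty for its risk. One common form uses variance as the risk measure; both the risk measure and the risk-aversion coefficient below are assumptions chosen for illustration, not the abstract's own proposal.

```python
# Sketch of a mean-risk evaluation: mean payoff minus a variance
# penalty, with an assumed risk-aversion coefficient.

def mean(probs, payoffs):
    return sum(p * x for p, x in zip(probs, payoffs))

def variance(probs, payoffs):
    m = mean(probs, payoffs)
    return sum(p * (x - m) ** 2 for p, x in zip(probs, payoffs))

def mean_risk_value(probs, payoffs, risk_aversion=0.1):
    return mean(probs, payoffs) - risk_aversion * variance(probs, payoffs)

gamble = ((0.5, 0.5), (10, 0))  # fair coin: win 10 or nothing
sure_thing = ((1.0,), (4,))     # 4 for certain
print(mean_risk_value(*gamble))      # -> 2.5 (mean 5, penalty 2.5)
print(mean_risk_value(*sure_thing))  # -> 4.0 (no variance, no penalty)
```

Here a risk-averse agent prefers the sure 4 to a gamble whose mean is 5, which is the kind of behavior the narrow-consequence version of expected utility is said to be unable to capture and the broad-consequence version is said to accommodate.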