Probabilistic models have much to offer to philosophy. We continually receive information from a variety of sources: from our senses, from witnesses, from scientific instruments. When considering whether we should believe this information, we assess whether the sources are independent, how reliable they are, and how plausible and coherent the information is. Bovens and Hartmann provide a systematic Bayesian account of these features of reasoning. Simple Bayesian Networks allow us to model alternative assumptions about the nature of the information sources. Measurement of the coherence of information is a controversial matter: arguably, the more coherent a set of information is, the more confident we may be that its content is true, other things being equal. The authors offer a new treatment of coherence which respects this claim and shows its relevance to scientific theory choice. Bovens and Hartmann apply this methodology to a wide range of much discussed issues regarding evidence, testimony, scientific theories, and voting. Bayesian Epistemology is an essential tool for anyone working on probabilistic methods in philosophy, and has broad implications for many other disciplines.
In their recently published book Nudge (2008) Richard H. Thaler and Cass R. Sunstein (T&S) defend a position labelled as ‘libertarian paternalism’. Their thinking appeals to both the right and the left of the political spectrum, as evidenced by the bedfellows they keep on either side of the Atlantic. In the US, they have advised Barack Obama, while, in the UK, they were welcomed with open arms by David Cameron's camp (Chakrabortty 2008). I will consider the following questions. What is Nudge? How is it different from social advertisement? Does Nudge induce genuine preference change? Does Nudge build moral character? Is there a moral difference between the use of Nudge as opposed to subliminal images to reach policy objectives? And what are the moral constraints on Nudge?
There are three slogans in the history of Socialism that are very close in wording, viz. the famous Cabet-Blanc-Marx slogan: "From each according to his ability; To each according to his needs"; the earlier Saint-Simon-Pecqueur slogan: "To each according to his ability; To each according to his works"; and the later slogan in Stalin’s Soviet Constitution: "From each according to his ability; To each according to his work." We will consider the following questions regarding these slogans: a) What are the earliest occurrences of each of these slogans? b) Where does the inspiration for each half of each slogan come from? c) What do the Saint-Simonians mean by “To each according to his ability”? d) What do they mean by “To each according to his works”? e) What motivates the shift from “To each according to his ability” to “From each according to his ability”? f) How should we envisage the progression toward “To each according to his needs”? g) What is the distinction between “To each according to his works” and “To each according to his work”?
Bayesian Coherence Theory of Justification or, for short, Bayesian Coherentism, is characterized by two theses, viz. (i) that our degree of confidence in the content of a set of propositions is positively affected by the coherence of the set, and (ii) that coherence can be characterized in probabilistic terms. There has been a longstanding question of how to construct a measure of coherence. We will show that Bayesian Coherentism cannot rest on a single measure of coherence, but requires a vector whose components exhaustively characterize the coherence properties of the set. Our degree of confidence in the content of the information set is a function of the reliability of the sources and the components of the coherence vector. The components of this coherence vector are weakly but not strongly separable, which blocks the construction of a single coherence measure.
This paper addresses a problem for theories of epistemic democracy. In a decision on a complex issue which can be decomposed into several parts, a collective can use different voting procedures: Either its members vote on each sub-question and the answers that gain majority support are used as premises for the conclusion on the main issue (the premise-based procedure, pbp), or the vote is conducted on the main issue itself (the conclusion-based procedure, cbp). The two procedures can lead to different results. We investigate which of these procedures is better as a truth-tracker, assuming that there exists a true answer to be reached. On the basis of the Condorcet jury theorem, we show that the pbp is universally superior if the objective is to reach truth for the right reasons. If one instead is after truth for whatever reasons, right or wrong, there will be cases in which the cbp is more reliable, even though, for the most part, the pbp still is to be preferred.
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
A coherent story is a story that fits together well. This notion plays a central role in the coherence theory of justification and has been proposed as a criterion for scientific theory choice. Many attempts have been made to give a probabilistic account of this notion. A proper account of coherence must not start from some partial intuitions, but should pay attention to the role that this notion is supposed to play within a particular context. Coherence is a property of an information set that boosts our confidence that its content is true ceteris paribus when we receive information from independent and partially reliable sources. We construct a measure c_r that relies on hypothetical sources with certain idealized characteristics. A maximally coherent information set, i.e. a set with equivalent propositions, affords a maximal confidence boost. c_r is the ratio of the actual confidence boost over the confidence boost that we would have received, had the information been presented in the form of maximally coherent information, ceteris paribus. This measure is functionally dependent on the degree of reliability r of the sources. We use c_r to construct a coherence quasi-ordering over information sets S and S’: S is no less coherent than S’ just in case c_r(S) is not smaller than c_r(S’) for any value of the reliability parameter r. We show that, on our account, the coherence of the story about the world gives us a reason to believe that the story is true and that the coherence of a scientific theory, construed as a set of models, is a proper criterion for theory choice.
Hope obeys Aristotle's doctrine of the mean: one should neither hope too much, nor too little. But what determines what constitutes too much and what constitutes too little for a particular person at a particular time? The sceptic presents an argument to the effect that it is never rational to hope. An attempt to answer the sceptic leads us in different directions. Decision-theoretic and preference-theoretic arguments support the instrumental value of hope. An investigation into the nature of hope permits us to assess the intrinsic value of hope. However, it must be granted to the sceptic that there is a tension between hope and epistemic rationality. I conclude with some reflections about the relationship between hope and character features that are constitutive of inner strength.
John Locke proposed a straightforward relationship between qualitative and quantitative doxastic notions: belief corresponds to a sufficiently high degree of confidence. Richard Foley has further developed this Lockean thesis and applied it to an analysis of the preface and lottery paradoxes. Following Foley's lead, we exploit various versions of these paradoxes to chart a precise relationship between belief and probabilistic degrees of confidence. The resolutions of these paradoxes emphasize distinct but complementary features of coherent belief. These features suggest principles that tie together qualitative and quantitative doxastic notions. We show how these principles may be employed to construct a quantitative model - in terms of degrees of confidence - of an agent's qualitative doxastic state. This analysis fleshes out the Lockean thesis and provides the foundation for a logic of belief that is responsive to the logic of degrees of confidence.
I discuss various questions concerning secular hopes in the face of death, that is, hopes other than the hope for eternal life. What is it to hope that one has lived a worthwhile life? Is there some contemporary analogue to Aristotle's claim that death on the battlefield is the most desirable death? “After me the downfall” said Louis XV—what interest should we take in the world in which we are no more? In her poem “Song” Christina Rossetti asks her beloved not to sing sad songs when she dies. What kind of attitudes do we hope others will have toward us after we are dead?
Some of the challenges in Sanders et al. (this issue) can be aptly illustrated by means of charity nudges, that is, nudges designed to increase charitable donations. These nudges raise many ethical questions. First, Oxfam’s triptychs with suggested donations are designed to increase giving. If successful, do our actions match ex ante or ex post preferences? Does this make a difference to the autonomy of the donor? Second, the Behavioural Insights Team conducted experiments using social networks to nudge people to give more. Do these appeals steer clear of exploiting power relations? Do they respect boundaries of privacy? Third, in an online campaign by Kiva, donors are asked to contribute directly to personalized initiatives. In many cases, the initiative has already been funded and donor money is funnelled to a new cause. Is such a “pre-disbursal” arrangement truthful and true to purpose as a social business model?
If we receive information from multiple independent and partially reliable information sources, then whether we are justified to believe these information items is affected by how reliable the sources are, by how well the information coheres with our background beliefs and by how internally coherent the information is. We consider the following question. Is coherence a separable determinant of our degree of belief, i.e. is it the case that the more coherent the new information is, the more justified we are in believing the new information, ceteris paribus? We show that if we consider sets of information items of any size (Holism), and if we assume that there exists a coherence Ordering over such sets and that coherence is a function of the probability distribution over the propositions in such sets (Probabilism), then Separability fails to hold.
If you believe more things you thereby run a greater risk of being in error than if you believe fewer things. From the point of view of avoiding error, it is best not to believe anything at all, or to have very uncommitted beliefs. But considering the fact that we all in fact do entertain many specific beliefs, this recommendation is obviously in flagrant dissonance with our actual epistemic practice. Let us call the problem raised by this apparent conflict the Addition Problem. In this paper we will find reasons to reject a particular premise used in the formulation of the Addition Problem, namely, the fundamental premise according to which believing more things increases the risk of error. As we will see, acquiring more beliefs need not decrease the probability of the whole, and hence need not increase the risk of error. In fact, more beliefs can mean an increase in the probability of the whole and a corresponding decrease in the risk of error. We will consider the Addition Problem as it arises in the context of the coherence theory of epistemic justification, while keeping firmly in mind that the point we wish to make is of epistemological importance also outside the specific coherentist dispute. The problem of determining exactly how the probability of the whole system depends on such factors as coherence, reliability and independence will be seen to open up an interesting area of research in which the theory of conditional independence structures is a helpful tool.
In “Judy Benjamin is a Sleeping Beauty” (2010) Bovens recognises a certain similarity between the Sleeping Beauty (SB) and the Judy Benjamin (JB). But he does not recognise the dissimilarity between the underlying protocols (as spelled out in Shafer (1985)). Protocols are expressed in conditional probability tables that spell out the probability of coming to learn various propositions conditional on the actual state of the world. The principle of total evidence requires that we not update on the content of the proposition learned but rather on the fact that we learn the proposition in question. Now attention to protocols drives a wedge between the SB and the JB. We have shown that the solution to a close variant of the SB which involves a clear protocol is P*(Heads) = 1/3 and since Beauty has precisely the same information at her disposal in the original SB at the time that she is asked to state her credence for Heads, the same solution should hold. The solution to the JB, on the other hand, is dependent on Judy’s probability distribution over protocols. One reasonable protocol yields P(Red) = 1/2, but Judy could also defend alternative values or a range of values in the interval [1/3, 1/2] depending on her probability distribution over protocols.
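The thirder value P*(Heads) = 1/3 can be checked by simulation under the standard Sleeping Beauty setup (one awakening on Heads, two on Tails, with every awakening counted as an occasion on which Beauty is asked for her credence). This is a minimal sketch of that protocol, not code from the paper:

```python
import random

random.seed(0)

# Simulate the Sleeping Beauty protocol: Heads yields one awakening,
# Tails yields two. Over many runs, count the fraction of awakenings
# at which the coin actually landed Heads.
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_awakenings += awakenings

print(heads_awakenings / total_awakenings)  # close to 1/3
```

The long-run frequency is (1/2 · 1) / (1/2 · 1 + 1/2 · 2) = 1/3, which is the credence the clear-protocol variant assigns to Heads.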
Utilitarianism, it has been said, is not sensitive to the distribution of welfare. In making risky decisions for others there are multiple sensitivities at work. I present examples of risky decision-making involving drug allocations, charitable giving, breast-cancer screening and C-sections. In each of these examples there is a different sensitivity at work that pulls away from the utilitarian prescription. The case of saving fewer people at a greater risk to many is more complex because there are two distributional sensitivities at work that pull in opposite directions from the utilitarian calculus. I discuss objections to these sensitivities and conclude with some reflections on the value of formal modelling in thinking about societal risk.
In this commentary on Yashar Saghai's article "Salvaging the Concept of Nudge" (JME 2013) I discuss his distinction between a 'prod' (which is 'substantially controlling') and a 'nudge' (which is ‘substantially non-controlling’).
Choice often proceeds in two stages: We construct a shortlist on the basis of limited and uncertain information about the options and then reduce this uncertainty by examining the shortlist in greater detail. The goal is to do well when making a final choice from the option set. I argue that we cannot realise this goal by constructing a ranking over the options at shortlisting stage which determines of each option whether it is more or less worthy of being included in a shortlist. This is relevant to the 2010 UK Equality Act. The Act requires that shortlists be constructed on grounds of candidate rankings and affirmative action is only permissible for equally qualified candidates. This is misguided: Shortlisting candidates with lower expected qualifications but higher variance may raise the chance of finding an exceptionally strong candidate. If it does, then shortlisting such candidates would make eminent business sense and there is nothing unfair about it. This observation opens up room for including more underrepresented candidates with protected characteristics, as they are more likely to display greater variance in the selector’s credence functions at shortlisting stage.
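The variance point can be illustrated with a Monte Carlo sketch. All numbers below are hypothetical choices for illustration, not figures from the Act or the paper: each candidate's realised quality is modelled as a normal draw around the selector's expected qualification, and swapping a lower-mean, higher-variance candidate onto the shortlist raises the expected quality of the best candidate found:

```python
import random

random.seed(42)

def expected_best(shortlist, trials=50_000):
    """Estimate the expected quality of the best candidate on a shortlist.

    Each candidate is a (mean, std) pair; realised quality is a normal
    draw, reflecting the selector's uncertainty at shortlisting stage.
    """
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(mu, sigma) for mu, sigma in shortlist)
    return total / trials

# Three safe candidates: high expected qualification, low uncertainty.
safe = [(0.60, 0.05), (0.60, 0.05), (0.60, 0.05)]
# Swap one out for a candidate with a lower mean but higher variance.
risky = [(0.60, 0.05), (0.60, 0.05), (0.55, 0.30)]

# Under these numbers the riskier shortlist has the higher expected maximum.
print(expected_best(safe) < expected_best(risky))  # True
```

The high-variance candidate drags down the expected qualification of the shortlist but raises the chance of an exceptionally strong realised quality, and it is only the maximum that matters at the final-choice stage.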
Why does an offender find it upsetting when the victim of their wrongdoing refuses to accept their apologies? Why do they find it upsetting when the victim is unwilling to grant them the forgiveness that they are asking for? I present an account of apologising and accepting apologies that can explain why this distress is an apt emotion.
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the more strongly the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, he discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson's model are not independent in the standard sense. We argue that there is much more than, in Olsson's words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen's characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson's argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson's model is applicable. 1 Claim (C) examined for Olsson's characterization of the relationship between the witnesses 2 Claim (C) examined for two or more independent witnesses 3 Robustness and multiple measures of corroboration 4 Discussion.
We appeal to the theory of Bayesian Networks to model different strategies for obtaining confirmation for a hypothesis from experimental test results provided by less than fully reliable instruments. In particular, we consider (i) repeated measurements of a single test consequence of the hypothesis, (ii) measurements of multiple test consequences of the hypothesis, (iii) theoretical support for the reliability of the instrument, and (iv) calibration procedures. We evaluate these strategies on their relative merits under idealized conditions and show some surprising repercussions on the variety-of-evidence thesis and the Duhem-Quine thesis.
I investigate whether any plausible moral arguments can be made for ‘grandfathering’ emission rights (that is, for setting emission targets for developed countries in line with their present or past emission levels) on the basis of a Lockean theory of property rights.
Belgium has recently extended its euthanasia legislation to minors, making it the first legislation in the world that does not specify any age limit. I consider two strands in the opposition to this legislation. First, I identify five arguments in the public debate to the effect that euthanasia for minors is somehow worse than euthanasia for adults—viz. arguments from weightiness, capability of discernment, pressure, sensitivity and sufficient palliative care—and show that these arguments are wanting. Second, there is another position in the public debate that wishes to keep the current age restriction on the books and have ethics boards exercise discretion in euthanasia decisions for minors. I interpret this position against the background of Velleman’s “Against the Right to Die” and show that, although costs remain substantial, it actually can provide some qualified support against extending euthanasia legislation to minors.
Risky prospects represent policies that impose different types of risks on multiple people. I present an example from food safety. A utilitarian following Harsanyi's Aggregation Theorem ranks such prospects according to their mean expected utility or the expectation of the social utility. Such a ranking is not sensitive to any of four types of distributional concerns. I develop a model that lets the policy analyst rank prospects relative to the distributional concerns that she considers fitting in the context at hand. I name this model ‘the Distribution View’ and pose it as an alternative to Parfit's Priority View for risky prospects.
There is a cognitive, an affective, a conative, and an attitudinal component to a genuine apology. In discussing these components, I address the following questions. Might apologies be due for non-culpable actions? Might apologies be due for choices in moral dilemmas? What is the link between sympathy, remorse and making amends? Is it meaningful for resilient akratics to apologize? How much moral renewal is required when one apologizes? Why should apologies be offered in a humble manner? And is there some truth to P. G. Wodehouse's dictum that 'the right sort of people do not want apologies'?
The extent to which EU countries take on their ‘fair share’ of asylum seekers is a contentious issue. Luc Bovens and Jane von Rabenau write on concern within Germany that the country is taking on a higher burden than other EU states. They argue that when compared on a per capita basis with similar EU countries, Germany performs relatively poorly in terms of acceptances for new refugees. Where Germany performs better is with respect to the size of the existing refugee population within the country; however, this may be a reflection of the difficulty refugees experience in gaining German citizenship.
In the novel "Het Been" by the Flemish writer Willem Elsschot, a businessman becomes obsessed with the fact that a victim of his unscrupulous business practices refuses to forgive him. This raises the following questions: Why does one find it upsetting when the victim of one's wrongdoing refuses to accept one's apologies? Why does one find it upsetting when the victim is unwilling to grant the forgiveness that one asks for?
The Puzzle of the Hats is a puzzle in social epistemology. It describes a situation in which a group of rational agents with common priors and common goals seems vulnerable to a Dutch book if they are exposed to different information and make decisions independently. Situations in which this happens involve violations of what might be called the Group-Reflection Principle. As it turns out, the Dutch book is flawed. It is based on the betting interpretation of the subjective probabilities, but ignores the fact that this interpretation disregards strategic considerations that might influence betting behavior. A lesson to be learned concerns the interpretation of probabilities in terms of fair bets and, more generally, the role of strategic considerations in epistemic contexts. Another lesson concerns Group-Reflection, which in its unrestricted form is highly counter-intuitive. We consider how this principle of social epistemology should be re-formulated so as to make it tenable.
There are two seemingly unrelated paradoxes of democracy. The older one is the doctrinal paradox or the discursive dilemma (for a comprehensive bibliography, see List 1995). The younger one is the mixed motivation problem introduced by Jonathan Wolff (1994) in this journal. In the mixed motivation problem, we have voters with mixed Benthamite and Rousseauian motivations who reach a majority on an issue that is neither in the self-interest of a majority of the voters, nor considered to be conducive to the common good by a majority of the voters. What has gone unnoticed so far is that both of these paradoxes share a common structure.
We construct a probabilistic coherence measure for information sets which determines a partial coherence ordering. This measure is applied in constructing a criterion for expanding our beliefs in the face of new information. A number of idealizations are made, which can be relaxed by an appeal to Bayesian Networks.
The Tragedy of the Commons is often associated with an n-person Prisoner’s Dilemma. But it can also have the structure of an n-person Game of Chicken, an Assurance Game, or a Voting Game (or a Three-in-a-Boat Game). I present three historical stories that document tragedies of the commons, as presented in Aristotle, Mahanarayan and Hume and argue that the descriptions of these historical cases align better with Voting Games than with any other games.
Some proponents of the pro-life movement argue against morning after pills, IUDs, and contraceptive pills on grounds of a concern for causing embryonic death. What has gone unnoticed, however, is that the pro-life line of argumentation can be extended to the rhythm method of contraception as well. Given certain plausible empirical assumptions, the rhythm method may well be responsible for a much higher number of embryonic deaths than some other contraceptive techniques.
Nancy Cartwright is one of the most distinguished and influential contemporary philosophers of science. Despite the profound impact of her work, there is neither a systematic exposition of Cartwright’s philosophy of science nor a collection of articles that contains in-depth discussions of the major themes of her philosophy. This book is devoted to a critical assessment of Cartwright’s philosophy of science and contains contributions from Cartwright's champions and critics. Broken into three parts, the book begins by addressing Cartwright's views on the practice of model building in science and the question of how models represent the world before moving on to a detailed discussion of methodologically and metaphysically challenging problems. Finally, the book addresses Cartwright's original attempts to clarify profound questions concerning the metaphysics of science. With contributions from leading scholars, such as Ronald N. Giere and Paul Teller, this unique volume will be extremely useful to philosophers of science the world over.
The Distribution View provides a model that integrates four distributional concerns in the evaluation of risky prospects. Starting from these concerns, we can generate an ordering over a set of risky prospects, or, starting from an ordering, we can extract a characterization of the underlying distributional concerns. Separability of States and/or Persons for multiple-person risky prospects, for single-person risky prospects and for multiple-person certain prospects are discussed within the model. The Distribution View sheds light on public health policies and provides a framework for the discussion of Parfit's Priority View for risky prospects.
I argue that by constructing an identity of Bohemian whim and spontaneity one can make what was previously an akratic action into a fully rational action, since in performing the action, one asserts one's identity.
We consider a special set of risky prospects in which the outcomes are either life or death. There are various alternatives to the utilitarian objective of minimizing the expected loss of lives in such prospects. We start off with the two-person case with independent risks and construct taxonomies of ex ante and ex post evaluations for such prospects. We examine the relationship between the ex ante and the ex post in this restrictive framework: There are more possibilities to respect ex ante and ex post objectives simultaneously than in the general framework, i.e. without the restriction to binary utilities. We extend our results to n persons and to dependent risks. We study optimal strategies for allocating risk reductions given different objectives. We place our results against the backdrop of various pro-poorly off value functions for the evaluation of risky prospects.
The Puzzle of the Hats is a betting arrangement which seems to show that a Dutch book can be made against a group of rational players with common priors who act in the common interest and have full trust in the other players’ rationality. But we show that appearances are misleading—no such Dutch book can be made. There are four morals. First, what can be learned from the puzzle is that there is a class of situations in which credences and betting rates diverge. Second, there is an analogy between ways of dealing with situations of this kind and different policies for sequential choice. Third, there is an analogy with strategic voting, showing that the common interest is not always served by expressing how things seem to you in social decision-making. And fourth, our analysis of the Puzzle of the Hats casts light on a recent controversy about the Dutch book argument for the Sleeping Beauty.
Blackburn argues that moral supervenience in conjunction with the lack of entailments from naturalistic to moral judgments poses a challenge to moral realism. Klagge and McFetridge try to avert the challenge by appealing to synthetically necessary connections between natural and moral properties. Blackburn rejoins that, even if there are such connections, the challenge still remains. We remain agnostic on the question whether there are such connections, but argue against Blackburn that, if there are indeed such connections, then the challenge to moral realism, properly phrased, does not hold up.
Suppose a committee or a jury confronts a complex question, the answer to which requires attending to several sub-questions. Two different voting procedures can be used. On one, the committee members vote on each sub-question and the voting results are used as premises for the committee’s conclusion on the main issue. This premise-based procedure can be contrasted with the conclusion-based approach, which requires the members to directly vote on the conclusion, with the vote of each member being guided by her views on the relevant sub-questions. The two procedures are not equivalent: There may be a majority of voters supporting each of the premises, but if these majorities do not significantly overlap, there will be a majority against the conclusion. Pettit (2001) connects the choice between the two procedures with the discussion of deliberative democracy. The problem we want to examine instead concerns the relative advantages and disadvantages of the two procedures from the epistemic point of view. Which of them is better when it comes to tracking truth? As it turns out, the answer is not univocal. On the basis of Condorcet’s jury theorem, the premise-based procedure can be shown to be superior if the objective is to reach truth for the right reasons, without making any mistakes on the way. However, if the goal instead is to reach truth for whatever reasons, right or wrong, there will be cases in which using the conclusion-based procedure turns out to be more reliable, even though, for the most part, the premise-based procedure will retain its superiority. (shrink)
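The divergence between the two procedures can be checked with a minimal example. The three-voter profile below is a standard textbook illustration of the doctrinal paradox, not a case from the paper: each voter judges two premises, and her conclusion is their conjunction.

```python
# Doctrinal paradox: majorities can support each premise even though
# a majority rejects the conclusion (the conjunction of the premises).
voters = [
    {"P": True,  "Q": True},   # concludes True
    {"P": True,  "Q": False},  # concludes False
    {"P": False, "Q": True},   # concludes False
]

def majority(votes):
    """True iff a strict majority of the boolean votes is True."""
    return sum(votes) > len(votes) / 2

# Premise-based procedure: majority vote on each premise, then conjoin.
premise_based = majority([v["P"] for v in voters]) and \
                majority([v["Q"] for v in voters])

# Conclusion-based procedure: each voter votes directly on the conjunction.
conclusion_based = majority([v["P"] and v["Q"] for v in voters])

print(premise_based)     # True: each premise carries a 2-1 majority
print(conclusion_based)  # False: only one voter accepts the conjunction
```

The two premise majorities overlap in only one voter, so the premise-based procedure accepts the conclusion while the conclusion-based procedure rejects it.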
I argue in this paper that there are two considerations which govern the dynamics of a two-person bargaining game, viz. relative proportionate utility loss from conceding to one's opponent's proposal and relative non-proportionate utility loss from not conceding to one's opponent's proposal, if she were not to concede as well. The first consideration can adequately be captured by the information contained in vNM utilities. The second requires measures of utility which allow for an interpersonal comparison of utility differences. These considerations provide for a justification of the Nash solution and the Kalai egalitarian solution respectively. However, neither of these solutions taken by itself can provide for a full story of bargaining, since, if within a context of bargaining one such consideration is overriding, the solution which does not match this consideration will yield unreasonable results. I systematically present arguments to the effect that each justification from self-interest for the Nash and the Kalai egalitarian solution respectively is vulnerable to this kind of objection. I suggest that the search for an integrative model may be a promising line of research.
We develop a utilitarian framework to assess different decision rules for the European Council of Ministers. The proposals to be decided on are conceptualized as utility vectors and a probability distribution is assumed over the utilities. We first show what decision rules yield the highest expected utilities for different means of the probability distribution. For proposals with high mean utility, simple benchmark rules (such as majority voting with proportional weights) tend to outperform rules that have been proposed in the political arena. For proposals with low mean utility, it is the other way round. We then compare the expected utilities for smaller and larger countries and look for Pareto-dominance relations. Finally, we provide an extension of the model, discuss its restrictions, and compare our approach with assessments of decision rules that are based on the Penrose measure of voting power.
There are two curious features about the backward induction argument (BIA) to the effect that repeated non-cooperation is the rational solution to the finite iterated prisoner’s dilemma (FIPD). First, however compelling the argument may seem, one remains hesitant either to recommend this solution to players who are about to engage in cooperation or to explain cooperation as a deviation from rational play in real-life FIPD’s. Second, there seems to be a similarity between the BIA for the FIPD and the surprise exam paradox (SEP) and one cannot help but wonder whether the former is indeed no more than an instance of the latter. I argue that there is an important difference between the BIA for the FIPD and the SEP, but that a comparison to the SEP can help us understand why the conclusion of the BIA for the FIPD strikes us as a counterintuitive solution to real-life FIPD’s.
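The backward induction argument itself can be sketched computationally. The payoff values below are the standard illustrative Prisoner's Dilemma numbers, not figures from the paper: in the last round defection strictly dominates, and since the continuation value of the already-solved rounds is the same whatever is played now, the same dominance argument settles every earlier round.

```python
# Stage-game payoffs for one player (standard illustrative PD values):
# T > R > P > S, with (C, C) -> R, (C, D) -> S, (D, C) -> T, (D, D) -> P.
T, R, P, S = 5, 3, 1, 0

def backward_induction(rounds):
    """Return the subgame-perfect play of the finitely iterated PD.

    Working from the last round backwards: the continuation value v of
    the rounds already solved is fixed by the induction and does not
    depend on the current move, so each round is settled by stage-game
    dominance alone.
    """
    play, v = [], 0  # v = continuation value of the rounds already solved
    for _ in range(rounds):
        # Against either opponent move, D beats C once v is held fixed ...
        assert T + v > R + v and P + v > S + v
        play.append("D")  # ... so defection is the unique best reply
        v += P            # equilibrium continuation: mutual defection
    return play

print(backward_induction(3))  # ['D', 'D', 'D']
```

However many rounds are stipulated, the unravelling yields defection throughout, which is exactly the conclusion that strikes us as counterintuitive for real-life FIPD's.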