A decision theory is self-recommending if, when you ask it which decision theory you should use, it considers itself to be among the permissible options. I show that many alternatives to expected utility theory are not self-recommending, and I argue that this tells against them.
According to a common judgement, a social planner should often use a lottery to decide which of two people should receive a good. This judgement undermines one of the best-known arguments for utilitarianism, due to John C. Harsanyi, and more generally undermines axiomatic arguments for utilitarianism and similar views. In this paper we ask which combinations of views about (a) the social planner’s attitude to risk and inequality, and (b) the subjects’ attitudes to risk are consistent with the aforementioned judgement. We find that the class of combinations of views that can plausibly accommodate this judgement is quite limited. But one theory does better than others: the theory of chance-sensitive utility.
Orthodox causal decision theory is unstable. Its advice changes as you make up your mind about what you will do. Several have objected to this kind of instability and explored stable alternatives. Here, I'll show that explorers in search of stability must part with a vestige of their homeland. There is no plausible stable decision theory which satisfies Savage's Sure Thing Principle. So those in search of stability must learn to live without it.
A de minimis risk is defined as a risk that is so small that it may be legitimately ignored when making a decision. While ignoring small risks is common in our day-to-day decision making, attempts to introduce the notion of a de minimis risk into the framework of decision theory have run up against a series of well-known difficulties. In this paper, I will develop an enriched decision theoretic framework that is capable of overcoming two major obstacles to the modelling of de minimis risk. The key move is to introduce, into decision theory, a non-probabilistic conception of risk known as normic risk.
A social planner who evaluates risky public policies in light of the other risks with which their society will be faced should judge favourably some such policies even though they would deem them too risky when considered in isolation. I suggest that a longtermist would—or at least should—evaluate risky policies in light of their prediction about future risks; hence, longtermism supports social risk-taking. I consider two formal versions of this argument, discuss the conditions needed for the argument to be valid, and briefly compare these conditions to some risky policy options with which actual public decision-makers are faced.
Suppose an agent is choosing between rescuing more people with a lower probability of success, and rescuing fewer with a higher probability of success. How should they choose? Our moral judgments about such cases are not well-studied, unlike the closely analogous non-moral preferences over monetary gambles. In this paper, I present an empirical study which aims to elicit the moral analogues of our risk preferences, and to assess whether one kind of evidence – concerning how they depend on outcome probabilities – can debunk them. I find significant heterogeneity in our moral risk preferences – in particular, moral risk-seeking and risk-neutrality are surprisingly popular. I also find that subjects’ judgments aren’t probability-dependent, thus providing an empirical defence against debunking arguments from probability dependence.
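The structure of such a rescue gamble can be sketched numerically. The numbers and the probability-weighting function below are illustrative assumptions of mine, not the study's stimuli:

```python
# Rescue gambles: save n people with probability p, else save no one.
# A risk-neutral agent ranks gambles by the expected number saved;
# a risk-weighted agent first transforms the success probability.

def expected_saved(n, p):
    return n * p

def weighted_saved(n, p, w):
    # w distorts probabilities: a convex w is risk-averse here,
    # a concave w is risk-seeking.
    return n * w(p)

risky = (10, 0.5)  # save 10 with probability 0.5
safe = (4, 0.9)    # save 4 with probability 0.9

# Risk-neutral: prefers the risky rescue (5.0 > 3.6).
ev_risky = expected_saved(*risky)
ev_safe = expected_saved(*safe)

# Risk-averse, with w(p) = p**2: prefers the safe rescue (2.5 < 3.24).
averse = lambda p: p ** 2
wv_risky = weighted_saved(*risky, averse)
wv_safe = weighted_saved(*safe, averse)
```

The same pair of gambles thus separates risk-neutral from risk-averse moral judgments, which is the kind of divergence the study's elicitation targets.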
When you are indifferent between two options, it’s rationally permissible to take either. One way to decide between two such options is to flip a fair coin, taking one option if it lands heads and the other if it lands tails. Is it rationally permissible to employ such a tie-breaking procedure? Intuitively, yes. However, if you are genuinely risk-averse—in particular, if you adhere to risk-weighted expected utility theory (Buchak in Risk and rationality, Oxford University Press, 2013) and have a strictly convex risk-function—the answer will often be no: the REU of deciding by coin-flip will be lower than the REU of choosing one of the options outright (so long as at least one of the options is a nondegenerate gamble). This turns out to be a significant worry for risk-weighted expected utility theory. I argue that it adds real bite to established worries about diachronic consistency afflicting views, like risk-weighted expected utility theory, that violate Independence, and that, while these worries might be surmountable, surmounting them comes at a price.
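The phenomenon can be checked by direct computation. The risk function r(p) = p² and the particular gambles below are my own illustrative assumptions, not Buchak's examples:

```python
def reu(gamble, r):
    """Risk-weighted expected utility (in the style of Buchak 2013).
    gamble: list of (utility, probability) pairs.
    r: risk function mapping [0, 1] to [0, 1]."""
    g = sorted(gamble)  # worst outcome first
    total = g[0][0]
    for i in range(1, len(g)):
        p_at_least = sum(p for _, p in g[i:])  # prob. of doing at least this well
        total += r(p_at_least) * (g[i][0] - g[i - 1][0])
    return total

r = lambda p: p ** 2  # strictly convex: risk-averse

sure_thing = [(5.0, 1.0)]                 # 5 utils for certain
gamble = [(0.0, 0.5), (20.0, 0.5)]        # nondegenerate gamble

# The agent is indifferent: both options have REU 5.0.
# Deciding by fair coin yields the compound gamble:
coin_flip = [(0.0, 0.25), (5.0, 0.5), (20.0, 0.25)]
# Its REU is 3.75 — strictly lower than either option taken outright.
```

So for this agent the coin-flip is strictly dispreferred to picking either option, even though she is indifferent between the options themselves.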
All ordinary decisions involve some risk. If I go outside for a walk, I may trip and injure myself. But if I don’t go for a walk, I slightly increase my chances of cardiovascular disease. Typically, we disregard most small risks. When, for practical purposes, is it appropriate for one to ignore risk? This issue looms large because many activities performed by those in wealthy societies, such as driving a car, in some way risk contributing to climate harms. Are these activities morally appropriate?

In this paper, I first summarize and respond to some arguments that purport to show that it is appropriate to ignore or discount very small risks. I argue that because our rationality is bounded, it is impossible for us to include every small risk in our decision-making process, and so we may reasonably use heuristics to guide many decisions. However, contrary to some thinkers, I argue that this does not violate the spirit of expected value theory; it merely shows that we should adopt a so-called "two-level" view. Our use of heuristics allows for the reasonable ignoring of some risks, and this perhaps explains why one might be inclined to think that individual climate-related risks are negligible. However, virtually all greenhouse-gas emitting activities in fact have some climate risk on the negative side of the ledger, and the use of heuristics does not permit the general ignoring of climate-change-related risk by individuals on grounds of expediency of judgment and decision-making.
This paper aims to scrutinize how the notion of risk should be understood and applied to possibly catastrophic cases. I begin by clarifying the standard usage of the notion of risk, particularly emphasizing the conceptual relation between risk and probability. Then, I investigate how to make decisions in the case of seemingly catastrophic disasters by contrasting the precautionary principle with the preventive (prevention) principle. Finally, I examine what kind of causal thinking tends to be actually adopted when we make decisions on emergent cases. My arguments are mainly based on Japan's 2011 massive earthquake and nuclear power station accident. Masaki Ichinose.
How should you evaluate your choices when you’re unsure what their outcomes will be? One popular answer is to rank your options in terms of their expected utilities. But what should you do when you think that the value of their respective outcomes might be incommensurable? In the face of incommensurable values, it no longer makes sense to speak of ranking your options according to expected utility. Are there any general principles to guide us when facing decisions of this kind? If only! This chapter develops an impossibility result: it holds that there are a handful of independently plausible constraints that no such decision theory can jointly satisfy. The result, while depressing, can be used to helpfully classify extant approaches based on which of the constraints they violate.
Do unknown and unrealized risks of harm diminish an individual’s well-being? The traditional answer is no: that the security of prudential goods benefits an individual only instrumentally or by virtue of their subjective sense of security. Recent work has argued, however, that the security of prudential goods non-instrumentally benefits an individual regardless of whether or not they enjoy subjective security. In this paper, I critically examine three claims about the way in which unknown and unrealized risks of harm might diminish individual well-being: that they frustrate a desire to be secure, that they frustrate the enjoyment of modally-robust goods, and that they undermine the ability to make reasonable plans. Ultimately, I argue that all three of these hypotheses are mistaken, but that they deepen our understanding of the ways in which subjective security is an important constituent of individual well-being.
For the purpose of this analysis, risk assessment becomes the primary term and risk management the secondary term. The concept of risk management as a primary term rests on a false ontology: risk management implies that risk is not created by the decision but lies already inherent in the situation that the decision sets into motion, so that the risk existing in the objective situation simply needs to be “managed”. By considering risk assessment as the primary term, the ethics of responsibility for risking the lives of others, the environment, and future generations in the first place comes to the forefront. The issue of risk heeding is especially important, as it highlights the need to pay attention to warnings of danger and to take action to redress problems before disasters occur. In this paper, the decision making that led to the choice of technology utilized and the implementation of such technology in the case of the space shuttle Challenger disaster will be used as a model to illustrate the need to take ethical factors into account when making decisions regarding the safety of technological systems and the heeding of danger warnings. Although twenty-five years separate the decision to launch the Challenger from the Fukushima Daiichi nuclear plant disaster, the lessons of the Challenger disaster are still to be learned.