_Rationality Through Reasoning_ answers the question of how people are motivated to do what they believe they ought to do, built on a comprehensive account of normativity, rationality and reasoning that differs significantly from much existing philosophical thinking. It includes an account of theoretical and practical reasoning that explains how reasoning is something we ourselves do, rather than something that happens in us; gives an account of what reasons are and argues that the connection between rationality and reasons is much less close than many philosophers have thought; and contains rigorous new accounts of oughts, including owned oughts, agent-relative reasons, the logic of requirements, instrumental rationality, the role of normativity in reasoning, following a rule, the correctness of reasoning, the connections between intentions and beliefs, and much else. It offers a new answer to the ‘motivation question’ of how a normative belief motivates an action.
This study uses techniques from economics to illuminate fundamental questions in ethics, particularly in the foundations of utilitarianism. Topics considered include the nature of teleological ethics, the foundations of decision theory, the value of equality and the moral significance of a person's continuing identity through time.
Normative requirements are often overlooked, but they are central features of the normative world. Rationality is often thought to consist in acting for reasons, but following normative requirements is also a major part of rationality. In particular, correct reasoning – both theoretical and practical – is governed by normative requirements rather than by reasons. This article explains the nature of normative requirements, and gives examples of their importance. It also describes mistakes that philosophers have made as a result of confusing normative requirements with reasons.
We are often faced with choices that involve the weighing of people's lives against each other, or the weighing of lives against other good things. These are choices both for individuals and for societies. A person who is terminally ill may have to choose between palliative care and more aggressive treatment, which will give her a longer life but at some cost in suffering. We have to choose between the convenience to ourselves of road and air travel, and the lives of the future people who will be killed by the global warming we cause, through violent weather, tropical disease, and heat waves. We also make choices that affect how many lives there will be in the future: as individuals we choose how many children to have, and societies choose tax policies that influence people's choices about having children. These are all problems of weighing lives. How should we weigh lives? Weighing Lives develops a theoretical basis for answering this practical question. It extends the work and methods of Broome's earlier book Weighing Goods to cover the questions of life and death. Difficult problems come up in the process. In particular, Weighing Lives tackles the well-recognized, awkward problems of the ethics of population. It carefully examines the common intuition that adding people to the population is ethically neutral - neither a good nor a bad thing - but eventually concludes this intuition cannot be fitted into a coherent theory of value. In the course of its argument, Weighing Lives examines many of the issues of contemporary moral theory: the nature of consequentialism and teleology; the transitivity, continuity, and vagueness of betterness; the quantitative conception of wellbeing; the notion of a life worth living; the badness of death; and others. This is a work of philosophy, but one of its distinctive features is that it adopts some of the precise methods of economic theory (without introducing complex mathematics).
Not only philosophers, but also economists and political theorists concerned with the practical question of valuing life, should find the book's conclusions highly significant to their work.
Esteemed philosopher John Broome avoids the familiar ideological stances on climate change policy and examines the issue through an invigorating new lens. As he considers the moral dimensions of climate change, he reasons clearly through what universal standards of goodness and justice require of us, both as citizens and as governments. His conclusions—some as demanding as they are logical—will challenge and enlighten. Eco-conscious readers may be surprised to hear they have a duty to offset all their carbon emissions, while policy makers will grapple with Broome’s analysis of what if anything is owed to future generations. From the science of greenhouse gases to the intricate logic of cap and trade, Broome reveals how the principles that underlie everyday decision making also provide simple and effective ideas for confronting climate change. Climate Matters is an essential contribution to one of the paramount issues of our time.
This paper is a response to ‘Why Be Rational?’ by Niko Kolodny. Kolodny argues that we have no reason to satisfy the requirements of rationality. His argument assumes that these requirements have a logically narrow scope. To see what the question of scope turns on, this comment provides a semantics for ‘requirement’. It shows that requirements of rationality have a wide scope, at least under one sense of ‘requirement’. Consequently Kolodny's conclusion cannot be derived.
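The scope distinction the argument turns on can be put schematically (this formalisation is an illustrative gloss, not Broome's own notation): let R be the requirement operator, and let Bp and Bq stand for believing p and believing q.

```latex
% Narrow scope: the requirement attaches only to the consequent.
Bp \rightarrow R(Bq)
% Wide scope: the requirement governs the whole conditional.
R(Bp \rightarrow Bq)
```

Under the wide-scope reading you can satisfy the requirement either by believing q or by giving up the belief that p; under the narrow-scope reading, once you believe p, rationality requires the specific attitude of believing q. Kolodny's argument needs the narrow reading, which is why establishing the wide reading blocks his conclusion.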
Several philosophers deny that an individual person’s emissions of greenhouse gas do any harm; I call these “individual denialists.” I argue that each individual’s emissions may do harm, and that they certainly do expected harm. I respond to the denialists’ arguments.
Many economic problems are also ethical problems: Should we value economic equality? How much should we care about preserving the environment? How should medical resources be divided between saving life and enhancing life? This book examines some of the practical issues that lie between economics and ethics, and shows how utility theory can contribute to ethics. John Broome's work has, unusually, combined sophisticated economic and philosophical expertise, and Ethics Out of Economics brings together some of his most important essays, augmented with an updated introduction. The first group of essays deals with the relation between preference and value, the second with various questions about the formal structure of good, and the concluding section with the value of life. This work is of interest and importance for both economists and philosophers, and shows powerfully how economic methods can contribute to moral philosophy.
Rationality requires various things of you. For example, it requires you not to have contradictory beliefs, and to intend what you believe is a necessary means to an end that you intend. Suppose rationality requires you to F. Does this fact constitute a reason for you to F? Does it even follow from this fact that you have a reason to F? I examine these questions and reach a sceptical conclusion about them. I can find no satisfactory argument to show that either has the answer ‘yes’. I consider the idea that rationality is normative for instrumental reasons, because it helps you to achieve some of the things you ought to achieve. I also consider the idea that rationality consists in responding correctly to reasons. I reject both.
Some philosophers think that rationality consists in responding correctly to reasons, or alternatively in responding correctly to beliefs about reasons. This paper considers various possible interpretations of ‘responding correctly to reasons’ and of ‘responding correctly to beliefs about reasons’, and concludes that rationality consists in neither, under any interpretation. It recognizes that, under some interpretations, rationality does entail responding correctly to beliefs about reasons. That is: necessarily, if you are rational you respond correctly to your beliefs about reasons.
Practical reasoning is a process of reasoning that concludes in an intention. One example is reasoning from intending an end to intending what you believe is a necessary means: 'I will leave the next buoy to port; in order to do that I must tack; so I'll tack', where the first and third sentences express intentions and the second sentence a belief. This sort of practical reasoning is supported by a valid logical derivation, and therefore seems incontrovertible. A more contentious example is normative practical reasoning of the form 'I ought to φ, so I'll φ', where 'I ought to φ' expresses a normative belief and 'I'll φ' an intention. This has at least some characteristics of reasoning, but there are also grounds for doubting that it is genuine reasoning. One objection is that it seems inappropriate to derive an intention to φ from a belief that you ought to φ, rather than a belief that you ought to intend to φ. Another is that you may not be able to go through this putative process of reasoning, and this inability might disqualify it from being reasoning. A third objection is that it violates the Humean doctrine that reason alone cannot motivate any action of the will. This paper investigates these objections.
I develop a scheme for the explanation of rational action. I start from a scheme that may be attributed to Thomas Nagel in The Possibility of Altruism, and develop it step by step to arrive at a sharper and more accurate scheme. The development includes a progressive refinement of the notion of motivation. I end by explaining the role of reasoning within the scheme.
The object of this paper is to explore the intersection of two issues – both of them of considerable interest in their own right. The first concerns the role that feasibility considerations play in constraining normative claims – claims, say, about what we (individually and collectively) ought to do and to be. This issue has particular relevance for the confrontation of moral philosophy with economics (and social science more generally). The second issue concerns whether normative claims are to be understood as applying only to actions in their own right or (also) non-derivatively to attitudes. Both these issues are ones on which different theorists have taken quite different stands, though we think there is more to be said about them. The point of juxtaposing them lies in the thought that actions and attitudes may be subject to different feasibility constraints – and hence that how we conceive of the role of feasibility in an account of normativity will depend in part on how we conceive of the role of actions and attitudes in normative theorising.
Most properties have comparatives, which are relations. For instance, the property of width has the comparative relation denoted by `_ is wider than _'. Let us say a property is reducible to its comparative if any statement that refers to the property has the same meaning as another statement that refers to the comparative instead. Width is not reducible to its comparative. To be sure, many statements that refer to width are reducible: for instance, `The Mississippi is wide' means the same as `The Mississippi is wider than most rivers'. But some statements that refer to width are not reducible: for instance, `Electrons have zero width' is not. A property is not reducible to its comparative if it has absolute degrees, and specifically an absolute zero. A property's comparative relation places things in an order from those that have the property least to those that have it most. If there is an absolute zero point somewhere in this ordering, the property is not reducible to its comparative. For width, there is an absolute zero at one end of the ordering, so width is not reducible. The property of goodness is reducible to its comparative, betterness. In particular, there is no absolute zero of goodness. Things are ordered by betterness – some things are better than others – but nothing is absolutely good or absolutely bad. This is an exaggeration. In certain applications, goodness does have absolute degrees of a sort, and an absolute zero. For instance, it makes sense to say an event is good, and another event bad. These are absolute degrees of a sort, but they are themselves reducible to betterness. To say an event is good simply means the event is better than what would otherwise have happened. The goodness of lives has a different sort of absolute zero. Lives are ordered by betterness; some lives are better than others. We can make sense of the question `Where in this ordering is the division between lives that are good and those that are bad?'
The division between good and bad lives is again reducible to betterness. To say a person's life is good means it is better that the person should continue living, rather than that she should die. Or it may mean it is better that her life should be lived rather than that it should never have been lived at all. As it is often put: a good life is a life worth living. Derek Parfit appears to attach a different sense to the idea of a good life. He appears to mean a life that contains a preponderance of good things (such as pleasure) over bad things (such as pain). If a life contains no good things and no bad things, it has zero goodness in this sense. In this sense, absolute goodness and the absolute zero of goodness are not reducible to betterness. However, this is a naturalistic sense of goodness, and it is subject to the open-question objection. If a life contains no good things and no bad things, it is an open question whether it has zero goodness. It might, for instance, be a bad thing that this life should be lived. In discussing the evil of death, some philosophers seem to have been searching for an absolute goodness that is not reducible to betterness. Thomas Nagel speaks of an asymmetry between what is good about life and what is bad about death. But if a person's life is good, that only means it would be better that she should continue living than that she should die. And if a person's death would be bad, that only means it would be worse that she should die than that she should continue living. So there can be no asymmetry.
Reasoning is a process through which premise-attitudes give rise to a conclusion-attitude. When you reason actively you operate on the propositions that are the contents of your premise-attitudes, following a rule, to derive a new proposition that is the content of your conclusion-attitude. It may seem that, when you follow a rule, you must, at least implicitly, have the normative belief that you ought to comply with the rule, which guides you to comply. But I argue that to follow a rule is to manifest a particular sort of disposition, which can be interpreted as an intention. An intention is itself a guiding disposition. It can guide you to comply with a rule, and no normative belief is required.
Dorsey rejects Conclusion, so he believes he must reject one of the premises. He argues that the best option is to reject Premise 3. Rejecting Premise 3 entails a certain sort of discontinuity in value. So Dorsey believes he has an argument for discontinuity.
Two options are incommensurate in value if neither is better than the other, and if a small improvement or worsening of one does not necessarily make it determinately better or worse than the other. If a person faces a sequence of choices between incommensurate options, she may end up with a worse option than she could have had, even though none of her choices are irrational. Yet it seems that rationality should save her from this bad outcome. This is the practical problem posed by incommensurability of values, and it may be solved by a new account of practical reasoning.
“Utility,” in plain English, means usefulness. In Australia, a ute is a useful vehicle. Jeremy Bentham specialized the meaning to a particular sort of usefulness. “By utility,” he said, “is meant that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness or to prevent the happening of mischief, pain, evil, or unhappiness to the party whose interest is considered”. The “principle of utility” is the principle that actions are to be judged by their usefulness in this sense: their tendency to produce benefit, advantage, pleasure, good, or happiness. When John Stuart Mill spoke of the “perfectly just conception of Utility or Happiness, considered as the directive rule of human conduct,” he was using “Utility” as a short name for this principle. “The greatest happiness principle” was another name for it. People who subscribed to this principle came to be known as utilitarians.
The standard backward-induction reasoning in a game like the centipede assumes that the players maintain a common belief in rationality throughout the game. But that is a dubious assumption. Suppose the first player X didn't terminate the game in the first round; what would the second player Y think then? Since the backward-induction argument says X should terminate the game, and it is supposed to be a sound argument, Y might be entitled to doubt X's rationality. Alternatively, Y might doubt that X believes Y is rational, or that X believes Y believes X is rational, or Y might have some higher-order doubt. X's deviant first move might therefore cause a breakdown in common belief in rationality. Once that goes, the entire argument fails. The argument also assumes that the players act rationally at each stage of the game, even if this stage could not be reached by rational play. But it is also dubious to assume that past irrationality never exerts a corrupting influence on present play. However, the backward-induction argument can be reconstructed for the centipede game on a more secure basis. It may be implausible to assume a common belief in rationality throughout the game, however the game might go, but the argument requires less than this. The standard idealisations in game theory certainly allow us to assume a common belief in rationality at the beginning of the game. They also allow us to assume this common belief persists so long as no one makes an irrational move. That is enough for the argument to go through.
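The backward-induction computation the abstract discusses can be sketched in a few lines. This is a minimal illustration with hypothetical payoffs, not an example from the text: players 1 and 2 alternate moves; at each stage the mover may "take" (ending the game) or "pass"; the solver works from the last stage backwards, letting each mover compare taking now against the already-computed continuation value.

```python
# Backward induction in a short centipede game (hypothetical payoffs).
# take_payoffs[t] = (player 1's payoff, player 2's payoff) if the game
# is taken at stage t; end_payoffs = payoffs if every stage is passed.
# Player 1 moves at even stages (0-indexed), player 2 at odd stages.

def solve_centipede(take_payoffs, end_payoffs):
    """Return the backward-induction action at each stage and the payoffs
    that result if both players follow that plan from the start."""
    value = end_payoffs          # continuation value after the last stage
    plan = []
    for t in reversed(range(len(take_payoffs))):
        mover = t % 2            # whose turn it is at stage t
        # The mover takes iff taking is at least as good for them as the
        # continuation value computed for the later stages.
        if take_payoffs[t][mover] >= value[mover]:
            plan.append("take")
            value = take_payoffs[t]
        else:
            plan.append("pass")  # value passes through unchanged
    plan.reverse()
    return plan, value

# Four-stage example: the pot grows as play continues, but at every stage
# taking gives the mover slightly more than the continuation value,
# so the argument unravels to "take" at the very first move.
plan, outcome = solve_centipede(
    take_payoffs=[(2, 0), (1, 3), (4, 2), (3, 5)],
    end_payoffs=(6, 4),
)
print(plan, outcome)  # ['take', 'take', 'take', 'take'] (2, 0)
```

The unravelling visible in the output is exactly what the abstract's argument defends: although both players would do better if play continued (the terminal payoffs dominate the first-stage payoffs), each stage's mover prefers taking to the backward-induction continuation, so rational play ends the game immediately.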