Humanity stands at a precipice.

Our species could survive for millions of generations — enough time to end disease, poverty, and injustice; to reach new heights of flourishing. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, gaining the power to destroy ourselves without the wisdom to ensure we won’t. Since then, these dangers have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not act fast to reach a place of safety, it may soon be too late.

The Precipice explores the science behind the risks we face. It puts them in the context of the greater story of humanity, showing how ending these risks is among the most pressing moral issues of our time. And it points the way forward, to the actions and strategies we can take today to safeguard humanity’s future.
How should we make decisions when we're uncertain about what we ought, morally, to do? Decision-making in the face of fundamental moral uncertainty is underexplored terrain: MacAskill, Bykvist, and Ord argue that there are distinctive norms by which it is governed, and which depend on the nature of one's moral beliefs.
The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
Suppose that we develop a medically safe and affordable means of enhancing human intelligence. For concreteness, we shall assume that the technology is genetic engineering (either somatic or germ line), although the argument we will present does not depend on the technological implementation. For simplicity, we shall speak of enhancing “intelligence” or “cognitive capacity,” but we do not presuppose that intelligence is best conceived of as a unitary attribute. Our considerations could be applied to specific cognitive abilities such as verbal fluency, memory, abstract reasoning, social intelligence, spatial cognition, numerical ability, or musical talent. It will emerge that the form of argument that we use can be applied much more generally to help assess other kinds of enhancement technologies as well as other kinds of reform. However, to give a detailed illustration of how the argument form works, we will focus on the prospect of cognitive enhancement.
This paper argues in favor of a particular account of decision‐making under normative uncertainty: that, when it is possible to do so, one should maximize expected choice‐worthiness. Though this position has often been suggested in the literature and is often taken to be the ‘default’ view, it has so far received little in the way of positive argument in its favor. After dealing with some preliminaries and giving the basic motivation for taking normative uncertainty into account in our decision‐making, we consider and provide new arguments against two rival accounts that have been offered—the accounts that we call ‘My Favorite Theory’ and ‘My Favorite Option’. We then give a novel argument for comparativism—the view that, under normative uncertainty, one should take into account both probabilities of different theories and magnitudes of choice‐worthiness. Finally, we further argue in favor of maximizing expected choice‐worthiness and consider and respond to five objections.
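As a toy illustration of maximizing expected choice-worthiness, the following sketch uses invented credences and choice-worthiness numbers (not figures from the paper):

```python
# Maximizing expected choice-worthiness (MEC), with made-up numbers.
# EC(option) = sum over theories of credence(theory) * choice-worthiness(option under theory).

def expected_choiceworthiness(option, credences, cw):
    """Credence-weighted sum of choice-worthiness across theories."""
    return sum(credences[t] * cw[t][option] for t in credences)

credences = {"T1": 0.7, "T2": 0.3}       # credence in each moral theory
cw = {
    "T1": {"A": 1.0, "B": 0.0},          # T1 mildly favours A
    "T2": {"A": -10.0, "B": 0.0},        # T2 strongly condemns A
}

options = ["A", "B"]
best = max(options, key=lambda o: expected_choiceworthiness(o, credences, cw))
# 'My Favorite Theory' (T1, with the highest credence) favours A, but MEC
# picks B: T2's strong verdict against A outweighs its lower credence.
```

This illustrates the comparativist point in the abstract: both the probabilities of theories and the magnitudes of choice-worthiness do work in the decision.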
It is often claimed that from the moment of conception embryos have the same moral status as adult humans. This claim plays a central role in many arguments against abortion, in vitro fertilization, and stem cell research. In what follows, I show that this claim leads directly to an unexpected and unwelcome conclusion: that natural embryo loss is one of the greatest problems of our time and that we must do almost everything in our power to prevent it. I examine the responses available to those who hold that embryos have full moral status and conclude that they cannot avoid the force of this argument without giving up this key claim.
A major problem for interpersonal aggregation is how to compare utility across individuals; a major problem for decision-making under normative uncertainty is the formally analogous problem of how to compare choice-worthiness across theories. We introduce and study a class of methods, which we call statistical normalization methods, for making interpersonal comparisons of utility and intertheoretic comparisons of choice-worthiness. We argue against the statistical normalization methods that have been proposed in the literature. We argue, instead, in favor of normalization of variance: we claim that this is the account that most plausibly gives all individuals or theories ‘equal say’. To this end, we provide two proofs that variance normalization has desirable properties that all other normalization methods lack, though we also show how different assumptions could lead one to axiomatize alternative statistical normalization methods.
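A minimal sketch of variance normalization, with hypothetical choice-worthiness scores: two theories that rank the options identically but use very different raw scales end up with identical scores after rescaling, so neither dominates merely through its choice of units.

```python
import statistics

def variance_normalize(values):
    """Rescale scores to mean 0 and variance 1, giving each theory 'equal say'."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)   # population standard deviation
    return [(v - mu) / sd for v in values]

# Hypothetical choice-worthiness of three options under two theories.
theory1 = [0, 1, 2]
theory2 = [0, 100, 200]   # same ranking, hundred-fold larger raw spread

n1 = variance_normalize(theory1)
n2 = variance_normalize(theory2)
# n1 == n2 (up to rounding): after normalization the theories contribute
# equally to any credence-weighted aggregate.
```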
Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that, as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one toward choosing the option preferred by the Total View and critical-level views, even if one’s credence in those theories is low.
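The effect can be seen in a toy expected-moral-value calculation (all numbers below are assumptions for illustration, not from the paper): the Total View assigns a value that scales with the number of lives, while a rival view's verdict stays bounded, so for large enough stakes the Total View's term dominates even at low credence.

```python
# Toy model: option ADD creates n_lives new lives of modest positive welfare w.
# The Total View values this at n_lives * w; a bounded rival view values it at -5.

def expected_moral_value(n_lives, credence_total=0.1, w=1.0, rival_value=-5.0):
    """Credence-weighted value of ADD under the two views."""
    return credence_total * (n_lives * w) + (1 - credence_total) * rival_value

small = expected_moral_value(10)     # 0.1*10  + 0.9*(-5) = -3.5: rival view wins
large = expected_moral_value(1000)   # 0.1*1000 + 0.9*(-5) = 95.5: Total View wins
```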
This report by the WHO Consultative Group on Equity and Universal Health Coverage addresses how countries can make fair progress towards the goal of universal coverage. It explains the relevant tradeoffs between different desirable ends and offers guidance on how to make these tradeoffs.
Prioritarianism is the moral view that a fixed improvement in someone's well-being matters more the worse off they are. Its supporters argue that it best captures our intuitions about unequal distributions of well-being. I show that prioritarianism sometimes recommends acts that will make things more unequal while simultaneously lowering the total well-being and making things worse for everyone ex ante. Intuitively, there is little to recommend such acts, and I take this to be a serious counterexample to prioritarianism.
If people have different resources, tastes, or needs, they may be able to exchange goods or services such that they each feel they have been made better off. This is trade. If people have different moral views, then there is another type of trade that is possible: they can exchange goods or services such that both parties feel that the world is a better place or that their moral obligations are better satisfied. We can call this moral trade. I introduce the idea of moral trade and explore several important theoretical and practical implications.
The diagonal method is often used to show that Turing machines cannot solve their own halting problem. There have been several recent attempts to show that this method also exposes either contradiction or arbitrariness in other theoretical models of computation which claim to be able to solve the halting problem for Turing machines. We show that such arguments are flawed—a contradiction only occurs if a type of machine can compute its own diagonal function. We then demonstrate why such a situation does not occur for the methods of hypercomputation under attack, and why it is unlikely to occur for any other serious methods.
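The diagonal construction can be sketched as follows (a simplified Python rendering, not the paper's formal machinery):

```python
# Suppose a function halts(f, x) could decide whether f(x) halts.
# Then we can build a program g that does the opposite of what
# halts predicts about it:

def make_diagonal(halts):
    def g(x):
        if halts(x, x):      # if x(x) would halt...
            while True:      # ...loop forever,
                pass
        return 0             # ...otherwise halt immediately.
    return g

# Feeding g to itself yields a contradiction: g(g) halts iff it doesn't.
# The point defended in the abstract: this contradiction requires the
# machine to compute its OWN diagonal function. A hypermachine that solves
# the halting problem only for ordinary Turing machines faces no such
# diagonal, so no contradiction arises.
```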
It is often said that there are three great traditions of normative ethics: consequentialism, deontology and virtue ethics. Each is based around a compelling intuition about the nature of ethics: that what is ultimately important is that we produce the best possible outcome, that ethics is a system of rules which govern our behaviour, and that ethics is about living a life that instantiates the virtues, such as honesty, compassion and loyalty. This essay is about how best to interpret consequentialism. I show that if we take consequentialism beyond the assessment of acts, using a consequentialist criterion to assess decision making, motivation, and character, then the resulting theory can also capture many of the intuitions about systems of moral rules and excellences of character that lead people to deontology and virtue ethics. I begin by considering the argument that consequentialism is self-defeating because its adoption would produce bad outcomes. I take up the response offered by the classical utilitarians: when properly construed, consequentialism does not require us to make our decisions by a form of naïve calculation, or to be motivated purely by universal benevolence. Instead it requires us to use the decision procedure that will produce the best outcome and to have the motives that lead to the best outcome. I take this idea as my starting point, and spend the thesis developing it and considering its implications. I demonstrate that neither act-consequentialism nor rule-consequentialism has the resources to adequately assess decision making and motivation. I therefore turn to the idea of global consequentialism, which assesses everything in terms of its consequences. I then spend the greater part of the essay exploring how best to set up such a theory and how best to apply it to decision making and motivation.
I overcome some important objections to the approach, and conclude by showing how the resulting approach to consequentialism helps to bridge the divide between the three traditions.
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I survey many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which allows comparison of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines.
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
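The core point can be put as a short calculation. With illustrative numbers (not the paper's actual LHC figures), the overall probability of the catastrophe is the argument's own estimate weighted by the chance the argument is sound, plus whatever probability remains if the argument is flawed:

```python
# P(event) = P(event | sound) * P(sound) + P(event | flawed) * P(flawed)
# All three inputs below are invented for illustration.
p_sound = 0.999               # probability the safety argument has no flaw
p_event_given_sound = 1e-15   # the argument's own headline estimate
p_event_given_flawed = 1e-4   # residual estimate if the argument fails somewhere

p_event = (p_event_given_sound * p_sound
           + p_event_given_flawed * (1 - p_sound))
# The result (~1e-7) is dominated by the chance the argument is flawed,
# not by the headline 1e-15 figure the argument itself produces.
```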
It is common to allocate scarce health care resources by maximizing QALYs per dollar. This approach has been attacked by disability-rights advocates, policy-makers, and ethicists on the grounds that it unjustly discriminates against the disabled. The main complaint is that the QALY-maximizing approach implies a seemingly unsatisfactory conclusion: other things being equal, we should direct life-saving treatment to the healthy rather than the disabled. This argument pays insufficient attention to the downsides of the potential alternatives. We show that this sort of discrimination is one of four unpalatable consequences that any approach to priority setting in health care must face. We argue that, given the alternatives, it is far from clear that we should revise the QALY-maximizing approach in response to this objection.
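The standard allocation rule is easy to state; the sketch below uses made-up numbers purely to show the mechanics the debate is about:

```python
# QALY maximization: fund the interventions with the highest
# quality-adjusted life years gained per dollar spent.

def qalys_per_dollar(qalys_gained, cost_dollars):
    return qalys_gained / cost_dollars

interventions = {
    "treatment_A": qalys_per_dollar(10, 5000),  # 0.002 QALYs per dollar
    "treatment_B": qalys_per_dollar(2, 500),    # 0.004 QALYs per dollar
}
best = max(interventions, key=interventions.get)
# The rule funds treatment_B first. The objection discussed in the abstract
# arises because QALY weights make life-saving treatment yield fewer QALYs
# for a patient with a pre-existing disability, all else being equal.
```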
Over the last few decades, there has been an increasing interest in global consequentialism. Where act-consequentialism assesses acts in terms of their consequences, global consequentialism goes much further, assessing acts, rules, motives — and everything else — in terms of the relevant consequences. Compared to act-consequentialism it offers a number of advantages: it is more expressive, it is a simpler theory, and it captures some of the benefits of rule-consequentialism without the corresponding drawbacks. In this paper, I explore the four different approaches to global consequentialism offered by Parfit, Pettit and Smith, Kagan, and Feldman. I break these up into their constituent components, demonstrating the space of possible global consequentialist theories, and I present two new theories within this space.
Universal health coverage is at the center of current efforts to strengthen health systems and improve the level and distribution of health and health services. This document is the final report of the WHO Consultative Group on Equity and Universal Health Coverage. It addresses the key issues of fairness and equity that arise on the path to universal health coverage. The report is therefore relevant to every actor that influences that path, and in particular to governments, as they are charged with overseeing and guiding progress towards universal health coverage.
Consequentialism is often charged with being self-defeating, for if a person attempts to apply it, she may quite predictably produce worse outcomes than if she applied some other moral theory. Many consequentialists have replied that this criticism rests on a false assumption, confusing consequentialism’s criterion of the rightness of an act with its position on decision procedures. Consequentialism, on this view, does not dictate that we should always be calculating which of the available acts leads to the most good, but instead advises us to decide what to do in whichever manner will lead to the best outcome. Whilst it is typically afforded only a small note in any text on consequentialism, this reply has deep implications for the practical application of consequentialism, perhaps entailing that a consequentialist should eschew calculation altogether.
We present a new paradigm extending the Iterated Prisoner's Dilemma to multiple players. Our model is unique in granting players information about past interactions between all pairs of players – allowing for much more sophisticated social behaviour. We provide an overview of preliminary results and discuss the implications in terms of the evolutionary dynamics of strategies.
We present a new method for expressing Chaitin’s random real, Ω, through Diophantine equations. Where Chaitin’s method causes a particular quantity to express the bits of Ω by fluctuating between finite and infinite values, in our method this quantity is always finite and the bits of Ω are expressed in its fluctuations between odd and even values, allowing for some interesting developments. We then use exponential Diophantine equations to simplify this result and finally show how both methods can also be used to create polynomials which express the bits of Ω in the number of positive values they assume.
Many of the commentaries have made similar points regarding the nature of full moral status, so I shall begin by addressing these together. They argue that my representation of the Claim is stronger than many proponents of full moral status would accept (Ord 2008). Robert Card (2008) says that I assume that it is equally bad to lose human life at all stages. Russell DiSilvestro (2008) says that I assume a flawed principle that he calls (M). Marianne Burda (2008) says that I assume that life must be saved or prolonged at all costs. Christopher Dodsworth and colleagues (2008) say that I assume embryos have as much to lose as adults. I assume none of these things. The argument I put forward works just as well for more subdued claims about the moral status of the embryo. All that is required is to find the badness of embryo death to be at least roughly comparable to the badness of adult death, so that when a proponent of full moral status hears that 30 times more of our moral equals die of spontaneous abortion than die of cancer, their views would require urgent action if such action is possible. The comparison between the badness of adult death and of fetal or embryonic death is made routinely in the literature in support of restrictions upon abortion, in vitro fertilization (IVF) and stem cell research, and it appears to be a mainstream view worthy of serious attention. If a large proportion of those who claim that the embryo has full moral status are nonetheless quite sure that each embryo death is much less bad than an adult death, then they owe it to their readers to be clearer about this. Let us now consider the other points of each commentary in turn.
While it is well known that a Turing machine equipped with the ability to flip a fair coin cannot compute more than a standard Turing machine, we show that this is not true for a biased coin. Indeed, any oracle set X may be coded as a probability pX such that if a Turing machine is given a coin which lands heads with probability pX it can compute any function recursive in X with arbitrarily high probability. We also show how the assumption of a non-recursive bias can be weakened by using a…
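The coding idea can be sketched as follows. This is a simplified illustration with assumed details (a finite stand-in for the oracle set, naive digit extraction from a frequency estimate), not the paper's construction:

```python
import random

# Code a set X into the binary expansion of a coin bias p,
# then recover bits of X by flipping the coin and estimating p.

def bias_from_set(x_bits):
    """p = 0.b1 b2 b3 ... in binary, where b_i = 1 iff i is in X."""
    return sum(b * 2 ** -(i + 1) for i, b in enumerate(x_bits))

def estimate_bit(p, index, flips=200_000, rng=random.Random(0)):
    """Estimate the index-th binary digit of p from repeated coin flips."""
    heads = sum(rng.random() < p for _ in range(flips))
    estimate = heads / flips
    return int(estimate * 2 ** (index + 1)) % 2

# Finite stand-in for an oracle set; the trailing bits pad the expansion
# so the frequency estimate never sits exactly on a digit boundary.
x_bits = [1, 0, 1, 1, 0, 1]
p = bias_from_set(x_bits)                        # 0.101101 in binary = 0.703125
recovered = [estimate_bit(p, i) for i in range(4)]
# With enough flips, the first bits of X are recovered with high probability.
```

This only illustrates the direction of the encoding; handling an infinite, non-recursive X and guaranteeing arbitrarily high success probability is exactly the technical content of the paper.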
We show how to determine the k-th bit of Chaitin’s algorithmically random real number Ω by solving k instances of the halting problem. From this we then reduce the problem of determining the k-th bit of Ω to determining whether a certain Diophantine equation with two parameters, k and N, has solutions for an odd or an even number of values of N. We also demonstrate two further examples of Ω in number theory: an exponential Diophantine equation with a parameter k which has an odd number of solutions iff the k-th bit of Ω is 1, and a polynomial of positive integer variables and a parameter k that takes on an odd number of positive values iff the k-th bit of Ω is 1.
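For reference, Chaitin's halting probability for a universal prefix-free machine $U$ is defined as

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},
```

so each bit of Ω compresses information about which programs halt; the results above transfer that information into the parity of solution counts of Diophantine equations.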