Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom – and corresponding doubts about the prudence of tampering with nature, especially human nature – manifest as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This paper explores the extent to which such prudence‐derived anti‐enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
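The core of this argument can be sketched with a standard identity from probability theory (the notation here is our own illustration, not necessarily the paper's). Let X be the catastrophic outcome and A the event that the expert's argument is sound:

```latex
\begin{align*}
P(X) &= P(X \mid A)\,P(A) + P(X \mid \neg A)\,P(\neg A) \\
     &\geq P(X \mid \neg A)\,P(\neg A)
\end{align*}
```

The expert's calculation delivers only the first factor, $P(X \mid A)$. If the chance that the argument is flawed, $P(\neg A)$, is much larger than the quoted estimate, then the second term can dominate, and the quoted figure no longer bounds the true risk.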
“Love hurts,” as the saying goes, and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as phlebotomy, exercise, or bloodletting could “cure” an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses and misuses of such anti-love biotechnology. In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise.
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general, an Oracle AI might be safer than an unrestricted AI, but it remains potentially dangerous.
We argue that the fragility of contemporary marriages—and the corresponding high rates of divorce—can be explained (in large part) by a three-part mismatch: between our relationship values, our evolved psychobiological natures, and our modern social, physical, and technological environment. “Love drugs” could help address this mismatch by boosting our psychobiologies while keeping our values and our environment intact. While individual couples should be free to use pharmacological interventions to sustain and improve their romantic connection, we suggest that they may have an obligation to do so as well, in certain cases. Specifically, we argue that couples with offspring may have a special responsibility to enhance their relationships for the sake of their children. We outline an evolutionarily informed research program for identifying promising biomedical enhancements of love and commitment.
Anthropogenic climate change is arguably one of the biggest problems that confront us today. There is ample evidence that climate change is likely to affect adversely many aspects of life for all people around the world, and that existing solutions such as geoengineering might be too risky and ordinary behavioural and market solutions might not be sufficient to mitigate climate change. In this paper, we consider a new kind of solution to climate change, what we call human engineering, which involves biomedical modifications of humans so that they can mitigate and/or adapt to climate change. We argue that human engineering is potentially less risky than geoengineering and that it could help behavioural and market solutions succeed in mitigating climate change. We also consider some possible ethical concerns regarding human engineering such as its safety, the implications of human engineering for our children and society, and we argue that these concerns can be addressed. Our upshot is that human engineering deserves further consideration in the debate about climate change.
The prospect of using memory modifying technologies raises interesting and important normative concerns. We first point out that those developing desirable memory modifying technologies should keep in mind certain technical and user-limitation issues. We next discuss certain normative issues that the use of these technologies can raise such as truthfulness, appropriate moral reaction, self-knowledge, agency, and moral obligations. Finally, we propose that as long as individuals using these technologies do not harm others and themselves in certain ways, and as long as there is no prima facie duty to retain particular memories, it is up to individuals to determine the permissibility of particular uses of these technologies.
This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction and attachment, the so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.