In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.
“Love hurts,” as the saying goes, and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as phlebotomy, exercise, or bloodletting could “cure” an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses, and misuses, of such anti-love biotechnology. In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise.
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad.”
We argue that the fragility of contemporary marriages—and the corresponding high rates of divorce—can be explained (in large part) by a three-part mismatch: between our relationship values, our evolved psychobiological natures, and our modern social, physical, and technological environment. “Love drugs” could help address this mismatch by boosting our psychobiologies while keeping our values and our environment intact. While individual couples should be free to use pharmacological interventions to sustain and improve their romantic connection, we suggest that they may have an obligation to do so as well, in certain cases. Specifically, we argue that couples with offspring may have a special responsibility to enhance their relationships for the sake of their children. We outline an evolutionarily informed research program for identifying promising biomedical enhancements of love and commitment.
The enhancement debate in neuroscience and biomedical ethics tends to focus on the augmentation of certain capacities or functions: memory, learning, attention, and the like. Typically, the point of contention is whether these augmentative enhancements should be considered permissible for individuals with no particular “medical” disadvantage along any of the dimensions of interest. Less frequently addressed in the literature, however, is the fact that sometimes the _diminishment_ of a capacity or function, under the right set of circumstances, could plausibly contribute to an individual's overall well-being: more is not always better, and sometimes less is more. Such cases may be especially likely, we suggest, when trade-offs in our modern environment have shifted since the environment of evolutionary adaptation. In this article, we introduce the notion of “diminishment as enhancement” and go on to defend a _welfarist_ conception of enhancement. We show how this conception resolves a number of definitional ambiguities in the enhancement literature, and we suggest that it can provide a useful framework for thinking about the use of emerging neurotechnologies to promote human flourishing.
This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction, and attachment, the so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.
The prospect of using memory modifying technologies raises interesting and important normative concerns. We first point out that those developing desirable memory modifying technologies should keep in mind certain technical and user-limitation issues. We next discuss certain normative issues that the use of these technologies can raise, such as truthfulness, appropriate moral reaction, self-knowledge, agency, and moral obligations. Finally, we propose that as long as individuals using these technologies do not harm others or themselves in certain ways, and as long as there is no prima facie duty to retain particular memories, it is up to individuals to determine the permissibility of particular uses of these technologies.
Anthropogenic climate change is arguably one of the biggest problems that confront us today. There is ample evidence that climate change is likely to affect adversely many aspects of life for all people around the world, and that existing solutions such as geoengineering might be too risky and ordinary behavioural and market solutions might not be sufficient to mitigate climate change. In this paper, we consider a new kind of solution to climate change, what we call human engineering, which involves biomedical modifications of humans so that they can mitigate and/or adapt to climate change. We argue that human engineering is potentially less risky than geoengineering and that it could help behavioural and market solutions succeed in mitigating climate change. We also consider some possible ethical concerns regarding human engineering, such as its safety and its implications for our children and society, and we argue that these concerns can be addressed. Our upshot is that human engineering deserves further consideration in the debate about climate change.
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general, an Oracle AI might be safer than unrestricted AI, but still remains potentially dangerous.
In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.
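The statistical claim in this abstract, that unilateral action by many well-meaning agents is taken more often than is optimal, can be illustrated with a minimal Monte Carlo sketch. The Gaussian value-and-noise model, the function name, and all parameter values below are illustrative assumptions, not drawn from the paper:

```python
import random

def simulate(n_agents, noise=1.0, trials=20_000, seed=0):
    """Estimate how often a harmful initiative (true value < 0) is
    undertaken when any one of n_agents can launch it unilaterally.

    Each trial draws a true common-good value v; every agent sees v
    plus independent Gaussian noise and acts iff her estimate is
    positive.  Because a single over-optimistic agent suffices to
    launch the initiative, errors accumulate in one direction only.
    """
    rng = random.Random(seed)
    harmful = undertaken = 0
    for _ in range(trials):
        v = rng.gauss(0.0, 1.0)  # true value of the initiative
        if v < 0:
            harmful += 1
            if any(v + rng.gauss(0.0, noise) > 0 for _ in range(n_agents)):
                undertaken += 1
    return undertaken / harmful

rate_solo = simulate(n_agents=1)
rate_group = simulate(n_agents=5)  # more unilateral deciders, more mistaken launches
```

Under these assumptions the harmful-launch rate rises sharply with the number of independent deciders, which is the "curse"; a conformity principle corresponds to requiring agreement among the agents rather than a single positive estimate.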
We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of the anthropic bias for estimation of catastrophic risks, and suggest some directions for future work.
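The observation-selection effect described in this abstract can be made concrete with a toy simulation; the per-world catastrophe probability and world count below are arbitrary illustrative choices, not figures from the paper:

```python
import random

def observed_vs_true_rate(p_catastrophe=0.2, worlds=100_000, seed=1):
    """Toy model of anthropic bias in catastrophe statistics.

    A sterilising catastrophe strikes each simulated world with
    probability p_catastrophe; observers arise only in worlds it
    spared.  Observers estimating the catastrophe rate from their
    own world's record therefore see a frequency of zero, however
    high the underlying rate is.
    """
    rng = random.Random(seed)
    struck = [rng.random() < p_catastrophe for _ in range(worlds)]
    true_rate = sum(struck) / worlds
    records_with_observers = [s for s in struck if not s]
    observed_rate = sum(records_with_observers) / len(records_with_observers)
    return true_rate, observed_rate

true_rate, observed_rate = observed_vs_true_rate()
```

The observed frequency is zero by construction, which is exactly the bias: the record available to surviving observers cannot contain observer-destroying events, so naive frequency-based risk estimates are lower bounds at best.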
In this article we discuss the moral and legal aspects of causing the death of a terminal patient in the hope of extending their life in the future. We call this theoretical procedure cryothanasia. We argue that administering cryothanasia is ethically different from administering euthanasia. Consequently, objections to euthanasia should not apply to cryothanasia, and cryothanasia could also be considered a legal option where euthanasia is illegal.
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
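The adjustment this abstract describes can be sketched as an application of the law of total probability, conditioning on whether the expert's argument is sound. The function and all numbers below are illustrative assumptions, not the paper's LHC figures:

```python
def adjusted_probability(p_given_sound, p_flawed, p_given_flawed):
    """Overall probability of the outcome once the chance that the
    expert's argument is flawed is taken into account:
    P(X) = P(X | sound) * (1 - P(flawed)) + P(X | flawed) * P(flawed)."""
    return p_given_sound * (1.0 - p_flawed) + p_given_flawed * p_flawed

# Hypothetical numbers: a headline estimate of one in a billion is
# dwarfed if there is a 1-in-1000 chance the argument is flawed and
# a flawed argument would leave, say, a 1-in-1000 residual risk.
naive = 1e-9
adjusted = adjusted_probability(p_given_sound=1e-9,
                                p_flawed=1e-3,
                                p_given_flawed=1e-3)
# The overall estimate is then dominated by the argument-failure term.
```

On these assumed numbers the adjusted probability sits roughly three orders of magnitude above the headline figure, which is the sense in which a tiny calculated probability can be "suspect" once the reliability of the calculation itself is priced in.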
Designer Biology: The Ethics of Intensively Engineering Biological and Ecological Systems consists of thirteen chapters that address the ethical issues raised by technological intervention and design across a broad range of biological and ecological systems. Among the technologies addressed are geoengineering, human enhancement, sex selection, genetic modification, and synthetic biology.