“Love hurts”—as the saying goes—and a certain amount of pain and difficulty in intimate relationships is unavoidable. Sometimes it may even be beneficial, since adversity can lead to personal growth, self-discovery, and a range of other components of a life well-lived. But other times, love can be downright dangerous. It may bind a spouse to her domestic abuser, draw an unscrupulous adult toward sexual involvement with a child, put someone under the insidious spell of a cult leader, and even inspire jealousy-fueled homicide. How might these perilous devotions be diminished? The ancients thought that treatments such as exercise or bloodletting could “cure” an individual of love. But modern neuroscience and emerging developments in psychopharmacology open up a range of possible interventions that might actually work. These developments raise profound moral questions about the potential uses—and misuses—of such anti-love biotechnology. In this article, we describe a number of prospective love-diminishing interventions, and offer a preliminary ethical framework for dealing with them responsibly should they arise.
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. While such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to “medicalization” of human love and heartache—for some, a source of serious concern. In this essay, we argue that the “medicalization of love” need not be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help direct the course of love’s “medicalization”—should it happen to occur—more toward the “good” side than the “bad.”
The enhancement debate in neuroscience and biomedical ethics tends to focus on the augmentation of certain capacities or functions: memory, learning, attention, and the like. Typically, the point of contention is whether these augmentative enhancements should be considered permissible for individuals with no particular “medical” disadvantage along any of the dimensions of interest. Less frequently addressed in the literature, however, is the fact that sometimes the _diminishment_ of a capacity or function, under the right set of circumstances, could plausibly contribute to an individual's overall well-being: more is not always better, and sometimes less is more. Such cases may be especially likely, we suggest, when trade-offs in our modern environment have shifted since the environment of evolutionary adaptation. In this article, we introduce the notion of “diminishment as enhancement” and go on to defend a _welfarist_ conception of enhancement. We show how this conception resolves a number of definitional ambiguities in the enhancement literature, and we suggest that it can provide a useful framework for thinking about the use of emerging neurotechnologies to promote human flourishing.
We argue that the fragility of contemporary marriages—and the corresponding high rates of divorce—can be explained (in large part) by a three-part mismatch: between our relationship values, our evolved psychobiological natures, and our modern social, physical, and technological environment. “Love drugs” could help address this mismatch by boosting our psychobiologies while keeping our values and our environment intact. While individual couples should be free to use pharmacological interventions to sustain and improve their romantic connection, we suggest that they may have an obligation to do so as well, in certain cases. Specifically, we argue that couples with offspring may have a special responsibility to enhance their relationships for the sake of their children. We outline an evolutionarily informed research program for identifying promising biomedical enhancements of love and commitment.
Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist.
Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos.
Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.
Originality/value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
This paper reviews the evolutionary history and biology of love and marriage. It examines the current and imminent possibilities of biological manipulation of lust, attraction, and attachment, the so-called neuroenhancement of love. We examine the arguments for and against these biological interventions to influence love. We argue that biological interventions offer an important adjunct to psychosocial interventions, especially given the biological limitations inherent in human love.
The possibility of enhancing human abilities often raises public concern about equality and social impact. This chapter focuses on one particular group of technologies, cognitive enhancement, and one particular fear: that enhancement will create social divisions and possibly expanding inequalities. The chapter argues that cognitive enhancements could offer significant social and economic benefits. The basic forms of internal cognitive enhancement technologies foreseen today are pharmacological modifications, genetic interventions, transcranial magnetic stimulation, and neural implants. Cognitive enhancements can influence the economy through reduction of losses, individual economic benefits, and society‐wide benefits. The strongest objection to the introduction of any enhancement technology is that it will create inequality, injustice, and unfairness. While there are clear economic and social benefits to cognitive enhancement, there exist a number of obstacles to its development and use. One obstacle is the present system for licensing drugs and medical treatments.
The prospect of using memory modifying technologies raises interesting and important normative concerns. We first point out that those developing desirable memory modifying technologies should keep in mind certain technical and user-limitation issues. We next discuss certain normative issues that the use of these technologies can raise such as truthfulness, appropriate moral reaction, self-knowledge, agency, and moral obligations. Finally, we propose that as long as individuals using these technologies do not harm others and themselves in certain ways, and as long as there is no prima facie duty to retain particular memories, it is up to individuals to determine the permissibility of particular uses of these technologies.
Anthropogenic climate change is arguably one of the biggest problems that confront us today. There is ample evidence that climate change is likely to adversely affect many aspects of life for all people around the world, and that existing solutions such as geoengineering might be too risky and ordinary behavioural and market solutions might not be sufficient to mitigate climate change. In this paper, we consider a new kind of solution to climate change, what we call human engineering, which involves biomedical modifications of humans so that they can mitigate and/or adapt to climate change. We argue that human engineering is potentially less risky than geoengineering and that it could help behavioural and market solutions succeed in mitigating climate change. We also consider some possible ethical concerns regarding human engineering, such as its safety and its implications for our children and society, and we argue that these concerns can be addressed. Our upshot is that human engineering deserves further consideration in the debate about climate change.
In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue that the particular claims advanced by the CASR are unpersuasive, partly due to a lack of clarity about the campaign’s aims and partly due to substantive defects in the main ethical objections put forward by the campaign’s founder(s). Second, broadening our inquiry beyond the arguments proffered by the campaign itself, we argue that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex, which is likely to be unpalatable to those who are active in the campaign. In making this argument we draw upon lessons from the campaign against killer robots. Finally, we conclude by suggesting that although a generalised campaign against sex robots is unwarranted, there are legitimate concerns that one can raise about the development of sex robots.
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifests as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. In this paper, we analyse and critique various methods of controlling the AI. In general an Oracle AI might be safer than unrestricted AI, but still remains potentially dangerous.
As cognitive neuroscience has advanced, the list of prospective internal, biological enhancements has steadily expanded. Education and training, as well as the use of external information‐processing devices, may be labeled as “conventional” means of cognition enhancement (CE). They are often well established and culturally accepted. By contrast, methods of enhancing cognition through “unconventional” means, such as ones involving deliberately created nootropic drugs, gene therapy, or neural implants, are nearly all to be regarded as experimental at the present time. Transcranial magnetic stimulation (TMS), genetic interventions, brain‐computer interfaces and new senses are highly experimental and unlikely to be important over the next 15 years. Many of the concerns about enhancement are nonspecific to the tools used to achieve it, which means that the ethical scrutiny applied to enhancement should also apply to nonbiological external enhancements.
‘Brainjacking’ refers to the exercise of unauthorized control of another’s electronic brain implant. Whilst the possibility of hacking a Brain–Computer Interface (BCI) has already been proven in both experimental and real-life settings, there is reason to believe that it will soon be possible to interfere with the software settings of the Implanted Pulse Generators (IPGs) that play a central role in Deep Brain Stimulation (DBS) systems. Whilst brainjacking raises ethical concerns pertaining to privacy and physical or psychological harm, we claim that the possibility of brainjacking DBS raises particularly profound concerns about individual autonomy, since the possibility of hacking such devices raises the prospect of third parties exerting influence over the neural circuits underpinning the subject’s cognitive, emotional and motivational states. However, although it seems natural to assume that brainjacking represents a profound threat to individual autonomy, we suggest that the implications of brainjacking for individual autonomy are complicated by the fact that technologies targeted by brainjacking often serve to enhance certain aspects of the user’s autonomy. The difficulty of ascertaining the implications of brainjacking DBS for individual autonomy is exacerbated by the varied understandings of autonomy in the neuroethical and philosophical literature. In this paper, we seek to bring some conceptual clarity to this area by mapping out some of the prominent views concerning the different dimensions of autonomous agency, and the implications of brainjacking DBS for each dimension. Drawing on three hypothetical case studies, we show that there could plausibly be some circumstances in which brainjacking could potentially be carried out in ways that could serve to enhance certain dimensions of the target’s autonomy. Our analysis raises further questions about the power, scope, and necessity of obtaining prior consent in seeking to protect patient autonomy when directly interfering with their neural states, in particular in the context of self-regulating closed-loop stimulation devices.
In this article we discuss the moral and legal aspects of causing the death of a terminal patient in the hope of extending their life in the future. We call this theoretical procedure cryothanasia. We argue that administering cryothanasia is ethically different from administering euthanasia. Consequently, objections to euthanasia should not apply to cryothanasia, and cryothanasia could also be considered a legal option where euthanasia is illegal.
In some situations a number of agents each have the ability to undertake an initiative that would have significant effects on the others. Suppose that each of these agents is purely motivated by an altruistic concern for the common good. We show that if each agent acts on her own personal judgment as to whether the initiative should be undertaken, then the initiative will be undertaken more often than is optimal. We suggest that this phenomenon, which we call the unilateralist’s curse, arises in many contexts, including some that are important for public policy. To lift the curse, we propose a principle of conformity, which would discourage unilateralist action. We consider three different models for how this principle could be implemented, and respond to an objection that could be raised against it.
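The effect described in the abstract above can be reproduced in a small Monte Carlo sketch. All of the numbers here are hypothetical, chosen only for illustration: each of several well-meaning agents acts on its own noisy private estimate of an initiative's true value, and the initiative goes ahead if any one of them acts.

```python
import random

random.seed(1)
N_AGENTS = 5        # hypothetical number of independent, altruistic agents
NOISE = 1.0         # std. dev. of each agent's private estimation error
TRIALS = 100_000

undertaken = 0      # the initiative happens if ANY agent judges it worthwhile
beneficial = 0      # how often the initiative is actually worth doing

for _ in range(TRIALS):
    # Hypothetical prior over initiatives: slightly negative on average.
    true_value = random.gauss(-0.5, 1.0)
    # Each agent acts iff its own noisy estimate of the value is positive.
    if any(random.gauss(true_value, NOISE) > 0 for _ in range(N_AGENTS)):
        undertaken += 1
    if true_value > 0:
        beneficial += 1

# The initiative is undertaken noticeably more often than it is beneficial.
print(undertaken / TRIALS, beneficial / TRIALS)
```

In this toy setting, a principle of conformity would amount to requiring more than one agent's estimate to be positive before acting, pulling the action rate back toward the optimum.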
Over the years, I have lectured about various enhancements and modifications of the human body; now I am going to deal more with the whys than the hows. I am hoping to demonstrate why the freedom to modify one's body is essential not just to transhumanism, but also to any future democratic society.
How individuals tend to evaluate the combination of their own and others’ payoffs—social value orientations—is likely to be a potential target of future moral enhancers. However, the stability of cooperation in human societies has been buttressed by evolved mildly prosocial orientations. If they could be changed, would this destabilize the cooperative structure of society? We simulate a model of moral enhancement in which agents play games with each other and can enhance their orientations based on maximizing personal satisfaction. We find that given the assumption that very low payoffs lead agents to be removed from the population, there is a broadly stable prosocial attractor state. However, the balance between prosociality and individual payoff-maximization is affected by different factors. Agents maximizing their own satisfaction can produce emergent shifts in society that reduce everybody’s satisfaction. Moral enhancement considerations should take the issues of social emergence into account.
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
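The probability-dilution point in this abstract can be sketched numerically with the law of total probability. All figures below are hypothetical placeholders, not numbers from the paper: even a small chance that the expert's argument is flawed can dominate an extremely low headline estimate.

```python
# Hypothetical numbers for illustration only; none come from the paper.
p_flaw = 1e-3                 # chance the expert's argument is flawed
p_event_given_sound = 1e-12   # the headline risk estimate, if the argument holds
p_event_given_flawed = 1e-6   # a (much larger) fallback prior if it fails

# Law of total probability: combine the two branches.
p_event = (1 - p_flaw) * p_event_given_sound + p_flaw * p_event_given_flawed

# The flaw term contributes ~1e-9, swamping the 1e-12 headline estimate.
print(f"{p_event:.2e}")
```

With these illustrative inputs the all-things-considered risk is roughly a thousand times the headline figure, which is exactly the sense in which an estimate "dwarfed by the chance that the argument itself is flawed" is suspect.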
How much value can our decisions create? We argue that unless our current understanding of physics is wrong in fairly fundamental ways, there exists an upper limit of value relevant to our decisions. First, due to the speed of light and the definition and conception of economic growth, the limit to economic growth is a restrictive one. Additionally, a related far larger but still finite limit exists for value in a much broader sense due to the physics of information and the ability of physical beings to place value on outcomes. We discuss how this argument can handle lexicographic preferences, probabilities, and the implications for infinite ethics and ethical uncertainty.
We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of the anthropic bias for estimation of catastrophic risks, and suggest some directions for future work.
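The observation-selection effect this abstract describes can be reproduced in a toy Monte Carlo model, with all parameters hypothetical: catastrophes occur at a fixed true rate, observers survive each one only with some probability, and the frequency recorded by the surviving observers then comes out systematically too low.

```python
import random

random.seed(0)
TRUE_RATE = 0.2   # hypothetical per-epoch probability of a catastrophe
SURVIVE = 0.5     # hypothetical chance observers survive any one catastrophe
EPOCHS = 20       # epochs of history each observer can look back on
TRIALS = 200_000

estimates = []    # catastrophe frequencies as recorded by surviving observers
for _ in range(TRIALS):
    events = sum(random.random() < TRUE_RATE for _ in range(EPOCHS))
    # Observers remain to take measurements only if they survived every event,
    # so histories with many catastrophes are underrepresented in the sample.
    if random.random() < SURVIVE ** events:
        estimates.append(events / EPOCHS)

observed = sum(estimates) / len(estimates)
# The survivors' average recorded frequency falls well below the true rate.
print(f"observed ~ {observed:.3f} vs. true rate {TRUE_RATE}")
```

Conditioning on survival here biases the per-epoch frequency downward (analytically, from 0.2 to about 0.11 with these parameters), which is the sense in which observed frequencies of observer-destroying catastrophes systematically understate the underlying risk.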
Human beings are a marvel of evolved complexity. When we try to enhance poorly-understood complex evolved systems, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. A recognition of this reality can manifest as a vaguely normative intuition, to the effect that it is “hubristic” to try to improve on nature, or that biomedical therapy is acceptable while enhancement is morally suspect. We suggest that one root of these moral intuitions may be fundamentally prudential rather than ethical. More importantly, we develop a practical heuristic, the “evolutionary optimality challenge”, for evaluating the plausibility that specific candidate biomedical interventions would be safe and effective. This heuristic recognizes the grain of truth contained in “nature knows best” attitudes while providing criteria for identifying the special cases where it may be feasible, with present or near-future technology, to enhance human nature.
Current and future possibilities for enhancing human physical ability, cognition, mood, and lifespan raise the ethical question of whether we should enhance normal human capacities in these ways. Answering this question requires a clear account of what enhancement is, and this chapter offers one. It begins by reviewing a number of suggested accounts of enhancement, and points to their shortcomings. The chapter then identifies two key senses of “enhancement”: functional enhancement, the enhancement of some capacity or power (e.g. vision, intelligence, health), and human enhancement, the enhancement of a human being's life. The latter notion is the notion of enhancement most relevant to ethical debate. The chapter argues that it is best understood in welfarist terms. It illustrates this welfarist approach to enhancement by applying it to the case of cognitive enhancement. Unlike the sociological, pragmatic, and functional approaches, the welfarist account is inherently normative. It ties enhancement to the value of well‐being.
The biosphere represents the global sum of all ecosystems. According to a prominent view in environmental ethics, ecocentrism, these ecosystems matter for their own sake, and not only because they contribute to human ends. As such, some ecocentrists are critical of modern industrial civilization, and a few even argue that an irreversible collapse of modern industrial civilization would be a good thing. However, taking a longer view and considering the eventual destruction of the biosphere by astronomical processes, we argue that humans, a species with considerable technological know-how and industrial capacity, could intervene to extend the lifespan of Earth’s biosphere, perhaps by several billion years. We argue that human civilization, despite its flaws and harmful impacts on many ecosystems, is the biosphere’s best hope of avoiding premature destruction. We argue that proponents of ecocentrism, even those who wholly disregard anthropocentric values, have a strong moral reason to preserve modern industrial civilization for as long as needed to ensure the biosphere’s survival.
Interview with Anders Sandberg, member of the Future of Humanity Institute at Oxford University and expert in human enhancement and transhumanism, about central topics in his research. KEYWORDS: transhumanism, human enhancement, Anders Sandberg, biotechnology.
This essay reviews different definitions and models of technological singularity. The models range from conceptual sketches to detailed endogenous growth models, as well as attempts to fit empirical data to quantitative models.
A brain emulation would be a one‐to‐one simulation where every causal process in the brain is represented, behaving in the same way as the original. Opponents of animal testing often argue that much of it is unnecessary and could be replaced with simulations. Personal identity is going to be a major issue with brain emulations, both because of the transition from an original unproblematic single human identity to successor identity/identities that might or might not be the same, and because software minds can potentially have multiple realizability. The process might produce distressed minds that have rights yet have an existence not worth living, or that lack the capacity to form or express their wishes. Emulations will experience and behave on a timescale set by the speed of their software. Brain emulations would not be self‐contained, and their survival would depend upon hardware over which they might not have any control.
Vernor Vinge's “singularity” is a worthy contribution to the long tradition of contemplations about human transcendence. Throughout history, most of these musings have dwelled upon the spiritual – the notion that human beings can achieve a higher state through prayer, moral behavior, or mental discipline.
Designer Biology: The Ethics of Intensively Engineering Biological and Ecological Systems consists of thirteen chapters that address the ethical issues raised by technological intervention and design across a broad range of biological and ecological systems. Among the technologies addressed are geoengineering, human enhancement, sex selection, genetic modification, and synthetic biology.
This paper explores the ethical implications of a possible future technology, namely cryonics of embryos/fetuses extracted from the uterus. We argue that more research should be conducted in order to explore the feasibility of such technology. We highlight the advantages that this option would offer, including the foreseeable prevention of a considerable number of abortions.