A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behavior. These decisions must be made up front in the initial AI design; designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics, and attention should focus on them rather than on social choice itself.
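To illustrate why the aggregation decision matters, the minimal sketch below (hypothetical data, not from the paper) applies two standard social choice rules, plurality voting and the Borda count, to the same individual rankings over candidate AI behaviors. The two rules select different winners, so the designer's choice of aggregation rule changes what the "aggregate view" guiding the AI would be. The behavior labels and rankings are invented for illustration.

```python
# Minimal sketch (hypothetical data): the same individual rankings of three
# candidate AI behaviors, aggregated with two standard social choice rules.
# The rules disagree, illustrating that the aggregation decision shapes
# which "aggregate view" the AI would act on.
from collections import Counter

# Seven hypothetical individuals' rankings, most preferred first.
rankings = (
    [["A", "B", "C"]] * 3 +   # 3 people most prefer behavior A
    [["B", "C", "A"]] * 2 +   # 2 people most prefer behavior B
    [["C", "B", "A"]] * 2     # 2 people most prefer behavior C
)

def plurality_winner(rankings):
    """Option with the most first-place votes."""
    first_place = Counter(r[0] for r in rankings)
    return first_place.most_common(1)[0][0]

def borda_winner(rankings):
    """Option with the highest Borda score: n-1 points for 1st place, n-2 for 2nd, and so on."""
    n = len(rankings[0])
    scores = Counter()
    for r in rankings:
        for place, option in enumerate(r):
            scores[option] += (n - 1) - place
    return scores.most_common(1)[0][0]

print("Plurality winner:", plurality_winner(rankings))  # A (most first-place votes)
print("Borda winner:    ", borda_winner(rankings))      # B (ranked highly by nearly everyone)
```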
Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future period in which it could continue to exist. Design/methodology/approach: The paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos. Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible. Originality/value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.
This paper discusses means for promoting artificial intelligence that is designed to be safe and beneficial for society. The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard for social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but intrinsic measures are at least as important; indeed, intrinsic factors can determine the success of extrinsic measures. Efforts to promote beneficial AI must therefore consider intrinsic factors by studying the social psychology of AI research communities.
In this essay we develop and argue for the adoption of a more comprehensive model of research ethics than is included within current conceptions of responsible conduct of research (RCR). We argue that our model, which we label the ethical dimensions of scientific research (EDSR), is a more comprehensive approach to encouraging ethically responsible scientific research than the approach typically adopted in current RCR training. This essay focuses on developing a pedagogical approach that enables scientists to better understand and appreciate one important component of this model, what we call intrinsic ethics. Intrinsic ethical issues arise when values and ethical assumptions are embedded within scientific findings and analytical methods. Through a close examination of a case study and its application in teaching, namely the evaluation of climate change integrated assessment models, this paper develops a method and a case for including intrinsic ethics within research ethics training, to provide scientists with a comprehensive understanding and appreciation of the critical role of values and ethical choices in the production of research outcomes.
Artificial intelligence experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes a realignment into two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; technical research on how to make any AI more beneficial to society; and policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other's advocacy for attention to the societal impacts of AI. The reconciliation between the presentist and futurist factions can improve both the near-term and the long-term societal impacts of AI.
The National Science Foundation's Second Merit Criterion, or Broader Impacts Criterion (BIC), was introduced in 1997 as the result of an earlier Congressional movement to enhance the accountability and responsibility, as well as the effectiveness, of federally funded projects. We demonstrate that a robust understanding and appreciation of the NSF BIC argues for a broader conception of research ethics in the sciences than is currently offered in Responsible Conduct of Research (RCR) training. This essay advocates augmenting RCR education with training regarding broader impacts. We demonstrate that enhancing research ethics training in this way provides a more comprehensive understanding of the ethics relevant to scientific research and prepares scientists to think not only in terms of responsibly conducted science, but also of the role of science in responding to identified social needs and in adhering to principles of social justice. As universities respond to the mandate from America COMPETES to “provide training and oversight in the responsible and ethical conduct of research”, we urge institutions to embrace a more adequate conception of research ethics, what we call the Ethical Dimensions of Scientific Research, that addresses the full range of ethical issues relevant to scientific inquiry, including ethical issues related to the broader impacts of scientific research and practice.
Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and space travel. Negative effects were found for military affairs (specifically rogue actor violence) and AI. The net effect for surveillance was ambiguous. The effects for the environment, military affairs, and AI appear to be the largest, with the environment perhaps the largest of these, suggesting that APM would be net beneficial to society. However, these factors are not well quantified and no definitive conclusion can be drawn. One conclusion that can be reached is that if APM R&D is pursued, it should go hand in hand with effective governance strategies to increase the benefits and reduce the harms.
Cost-benefit analysis (CBA) evaluates actions in terms of their negative consequences (costs) and positive consequences (benefits). Though much has been said on CBA, little attention has been paid to the types of values held by costs and benefits. This paper introduces a simple typology of values in CBA and applies it to three forms of CBA: the common money-based CBA, CBA based in social welfare, and CBA based in intrinsic value. The latter extends CBA beyond its usual anthropocentric domain. Adequate handling of value typology in CBA avoids analytical mistakes and connects CBA to its consequentialist roots.
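As a minimal illustration of the arithmetic that all three forms of CBA share (hypothetical numbers, not from the paper): the net benefit is the sum of benefits minus the sum of costs, and the typology question concerns what those numbers are taken to measure, whether money, social welfare, or intrinsic value, rather than how they are combined.

```python
# Minimal sketch (hypothetical numbers): the shared arithmetic of CBA is
# net benefit = sum of benefits - sum of costs. The typology question is what
# the entries are taken to measure (money, social welfare, or intrinsic value),
# not how they are combined.

costs = {"construction": 120.0, "maintenance": 30.0}                 # e.g., $ millions
benefits = {"travel_time_saved": 90.0, "emissions_avoided": 75.0}    # same units

net_benefit = sum(benefits.values()) - sum(costs.values())
print(f"Net benefit: {net_benefit:+.1f}")  # +15.0, so the action passes this money-based CBA
```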
In a recent editorial, Raymond Spier expresses skepticism over claims that climate change is driven by human actions and that humanity should act to avoid climate change. This paper responds to that skepticism as part of a broader review of the science and ethics of climate change. While much remains uncertain about the climate, research indicates that observed temperature increases are human-driven. Although opinions vary regarding what should be done, prominent arguments against action are based on dubious factual and ethical positions. Thus, the skepticism in the recent editorial is unwarranted. This does not diminish the general merits of skeptical intellectual inquiry.
This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. These challenges include epistemic divides between disciplines, the massive bodies of relevant literature, the peer review of work that integrates an eclectic mix of topics, and the transfer of interdisciplinary research insights from one problem to another. Artificial interdisciplinarity already helps with these challenges via search engines, recommendation engines, and automated content analysis. Future “strong artificial interdisciplinarity” based on human-level artificial general intelligence could excel at interdisciplinary research, but it may take a long time to develop and could pose major safety and ethical issues. Therefore, there is an important role for intermediate-term artificial interdisciplinarity systems, which could make major contributions to addressing societal problems without the concerns associated with artificial general intelligence.
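As a toy illustration of the “automated content analysis” role mentioned above (all snippets and field names are hypothetical, not from the paper), the sketch below ranks documents from different disciplines by simple word overlap with a query, a bare-bones stand-in for the search and recommendation tools that help researchers navigate massive bodies of literature across epistemic divides.

```python
# Toy sketch (hypothetical snippets): a bare-bones form of automated content
# analysis, ranking documents from different disciplines by word overlap
# (Jaccard similarity) with a query, as a stand-in for the retrieval tools
# mentioned above.

def tokens(text):
    """Lowercased word set, ignoring very short words."""
    return {w for w in text.lower().split() if len(w) > 3}

def jaccard(a, b):
    """Word-set overlap, from 0.0 (disjoint) to 1.0 (identical)."""
    a, b = tokens(a), tokens(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical abstracts from three disciplines.
corpus = {
    "public_health":   "Modeling vaccination uptake and disease transmission in urban populations",
    "economics":       "Incentives and subsidies for vaccination uptake under budget constraints",
    "climate_science": "Projecting regional temperature change under emissions scenarios",
}

query = "policy incentives for vaccination uptake"
for field, text in sorted(corpus.items(), key=lambda kv: jaccard(query, kv[1]), reverse=True):
    print(f"{jaccard(query, text):.2f}  {field}")
```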