Results for 'existential risks'

999 found
  1. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    5 citations
  2. Existential Risk and Equal Political Liberty. J. Joseph Porter & Adam F. Gibbons - forthcoming - Asian Journal of Philosophy.
    Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which (...)
    1 citation
  3. Existential risk pessimism and the time of perils. David Thorstad - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued (...)
    2 citations
  4. Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with (...)
    6 citations
  5. Existential risks: a philosophical analysis. Phil Torres - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy 66 (4):614-639.
    This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential (...)
    2 citations
  6. The Precipice: Existential Risk and the Future of Humanity. Toby Ord - 2020 - London: Bloomsbury.
    Humanity stands at a precipice. -/- Our species could survive for millions of generations — enough time to end disease, poverty, and injustice; to reach new heights of flourishing. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, gaining the power to destroy ourselves, without the wisdom to ensure we won’t. Since then, these dangers have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not (...)
    71 citations
  7. Existential risks: analyzing human extinction scenarios and related hazards. Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics (...)
    80 citations
  8. Existential Risk, Climate Change, and Nonideal Justice. Alex McLaughlin - 2024 - The Monist 107 (2):190-206.
    Climate change is often described as an existential risk to the human species, but this terminology has generally been avoided in the climate-justice literature in analytic philosophy. I investigate the source of this disconnect and explore the prospects for incorporating the idea of climate change as an existential risk into debates about climate justice. The concept of existential risk does not feature prominently in these discussions, I suggest, because assumptions that structure ‘ideal’ accounts of climate justice ensure (...)
  9. Confronting Existential Risks With Voluntary Moral Bioenhancement. Vojin Rakić & Milan M. Ćirković - 2016 - Journal of Evolution and Technology 26 (2):48-59.
    We outline an argument favoring voluntary moral bioenhancement as a response to existential risks humanity exposes itself to. We consider this type of enhancement a solution to the antithesis between the extinction of humanity and the imperative of humanity to survive at any cost. By opting for voluntary moral bioenhancement, we refrain from advocating illiberal or even totalitarian strategies that would allegedly help humanity preserve itself. We argue that such strategies, by encroaching upon the freedom of individuals, already (...)
    3 citations
  10. Existential risk, creativity & well-adapted science. Adrian Currie - 2019 - Studies in History and Philosophy of Science Part A 76:39-48.
  11. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being. S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern (...)
  12. Existential Risk, Creativity & Well-Adapted Science. Adrian Currie - forthcoming - Studies in History and Philosophy of Science.
  13. Space Colonization and Existential Risk. Joseph Gottlieb - 2019 - Journal of the American Philosophical Association 5 (3):306-320.
    Ian Stoner has recently argued that we ought not to colonize Mars because doing so would flout our pro tanto obligation not to violate the principle of scientific conservation, and there are no countervailing considerations that render our violation of the principle permissible. While I remain agnostic on the former claim, my primary goal in this article is to challenge the latter: there are countervailing considerations that render our violation of the principle permissible. As such, Stoner has failed to establish that we ought not (...)
    5 citations
  14. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI (...)
    1 citation
  15. Discounting, Buck-Passing, and Existential Risk Mitigation: The Case of Space Colonization. Joseph Gottlieb - forthcoming - Space Policy.
    Large-scale, self-sufficient space colonization is a plausible means of efficiently reducing existential risks and ensuring our long-term survival. But humanity is by and large myopic, and as an intergenerational global public good, existential risk reduction is systematically undervalued, hampered by intergenerational discounting. This paper explores how these issues apply to space colonization, arguing that the motivational and psychological barriers to space colonization are a special—and especially strong—case of a more general problem. The upshot is not that large-scale, (...)
  16. Existential Risk Prevention as Global Priority. Nick Bostrom - 2013 - Global Policy 4 (1):15–31.
    41 citations
  17. Existential risks: New Zealand needs a method to agree on a value framework and how to quantify future lives at risk. Matthew Boyd & Nick Wilson - 2018 - Policy Quarterly 14 (3):58-65.
    Human civilisation faces a range of existential risks, including nuclear war, runaway climate change and superintelligent artificial intelligence run amok. As we show here with calculations for the New Zealand setting, large numbers of currently living and, especially, future people are potentially threatened by existential risks. A just process for resource allocation demands that we consider future generations but also account for solidarity with the present. Here we consider the various ethical and policy issues involved and (...)
  18. “White Crisis” and/as “Existential Risk,” or the Entangled Apocalypticism of Artificial Intelligence. Syed Mustafa Ali - 2019 - Zygon 54 (1):207-224.
    In this article, I present a critique of Robert Geraci's Apocalyptic artificial intelligence (AI) discourse, drawing attention to certain shortcomings which become apparent when the analytical lens shifts from religion to the race–religion nexus. Building on earlier work, I explore the phenomenon of existential risk associated with Apocalyptic AI in relation to “White Crisis,” a modern racial phenomenon with premodern religious origins. Adopting a critical race theoretical and decolonial perspective, I argue that all three phenomena are entangled and they (...)
    3 citations
  19. Existential risks. Nick Bostrom - manuscript
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics (...)
    2 citations
  20. Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp. Markéta Poledníková - 2020 - Pro-Fil 21 (1):91.
    Book review: Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp.
  21. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility which can (...)
  22. Mistakes in the moral mathematics of existential risk. David Thorstad - forthcoming - Ethics.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the (...)
  23. High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation. David Thorstad - 2023 - Philosophy and Public Affairs 51 (4):373-412.
    Philosophy & Public Affairs, Volume 51, Issue 4, Page 373-412, Fall 2023.
    3 citations
  24. The Black Hole Challenge: Precaution, Existential Risks and the Problem of Knowledge Gaps. Christian Munthe - 2019 - Ethics, Policy and Environment 22 (1):49-60.
    So-called ‘existential risks’ present virtually unlimited reasons for probing them and responses to them further. The ensuing normative pull to respond to such risks thus seems to present us with r...
    5 citations
  25. Probabilities, Methodologies and the Evidence Base in Existential Risk Assessments. Thomas Rowe & Simon Beard - manuscript
    This paper examines and evaluates a range of methodologies that have been proposed for making useful claims about the probability of phenomena that would contribute to existential risk. Section One provides a brief discussion of the nature of such claims, the contexts in which they tend to be made and the kinds of probability that they can contain. Section Two provides an overview of the methodologies that have been developed to arrive at these probabilities and assesses their advantages and (...)
    1 citation
  27. Book Review: Phil Torres’s Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. [REVIEW] Steven Umbrello - 2018 - Futures 98:90-91.
    A new book by Phil Torres, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, is reviewed. Morality, Foresight and Human Flourishing is a primer intended to introduce students and interested scholars to the concepts and literature on existential risk. The book’s core methodology is to outline the various existential risks currently discussed in different disciplines and to provide novel strategies for risk mitigation. The book is stylistically engaging, lucid and academically current, providing both novice (...)
  28. Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Bloomsbury, 2020. Benedikt Namdar & Thomas Pölzler - 2021 - Ethical Theory and Moral Practice 24 (3):855-857.
  30. Whose Survival? A Critical Engagement with the Notion of Existential Risk. Philip Højme - 2019 - Scientia et Fides 7 (2):63-76.
    This paper provides a critique of Bostrom’s concern with existential risks, a critique which relies on Adorno and Horkheimer’s interpretation of the Enlightenment. Their interpretation is used to elicit the inner contradictions of transhumanist thought and to show the invalid premises on which it is based. By first outlining Bostrom’s position, this paper argues that transhumanism reverts to myth in its attempt to surpass the human condition. Bostrom’s argument is based on three pillars: Maxipok, Parfitian population ethics and (...)
    1 citation
  31. Review of The Precipice: Existential Risk and the Future of Humanity. [REVIEW] Theron Pummer - 2020 - Notre Dame Philosophical Reviews 8.
  32. The Precipice: Existential Risk and the Future of Humanity. By Toby Ord. [REVIEW] Daniel John Sportiello - 2023 - American Catholic Philosophical Quarterly 97 (1):147-150.
  33. The precipice: Existential risk and the future of humanity. Ord, Toby. New York: Hachette, 2020. 468 pp. US$30. ISBN 9780316484916 (Hardback). [REVIEW] David Heyd - 2022 - Bioethics 36 (9):1001-1002.
  36. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
    2 citations
  37. The Threat of Longtermism: Is Ecological Catastrophe an Existential Risk? Disillusioned Ideals for a Bold, New Future. Sarah Frances Hicks & Dominika Janus - 2023 - Filozofia 78 (10S):133-148.
  38. Welcome to the Machine: AI, Existential Risk, and the Iron Cage of Modernity. Jay A. Gupta - 2023 - Telos: Critical Theory of the Contemporary 2023 (203):163-169.
    Excerpt: Recent advances in the functional power of artificial intelligence (AI) have prompted an urgent warning from industry leaders and researchers concerning its “profound risks to society and humanity.” Their open letter is admirable not only for its succinct identification of said risks, which include the mass dissemination of misinformation, loss of jobs, and even the possible extinction of our species, but also for its clear normative framing of the problem: “Should we let machines flood our information channels with (...)
  39. The Precipice – Existential Risk and the Future of Humanity. Toby Ord, 2020, London, Bloomsbury Publishing. 480 pp, £22.50. [REVIEW] Martin Sand - 2021 - Journal of Applied Philosophy 38 (4):722-724.
    Journal of Applied Philosophy, EarlyView.
  40. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims (...)
  41. Concepts of Existential Catastrophe. Hilary Greaves - 2024 - The Monist 107 (2):109-129.
    The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, (...)
  42. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters (...)
  43. An Assessment of Existential Worldview Function among Young Women at Risk for Depression and Anxiety—A Multi-Method Study. Christina Sophia Lloyd, Britt af Klinteberg & Valerie DeMarinis - 2017 - Archive for the Psychology of Religion 39 (2):165-203.
    Increasing rates of psychiatric problems like depression and anxiety among Swedish youth, predominantly among females, are considered a serious public mental health concern. Multiple studies confirm that psychological as well as existential vulnerability manifest in different ways for youths in Sweden. This multi-method study aimed at assessing existential worldview function by three factors: 1) existential worldview, 2) ontological security, and 3) self-concept, attempting to identify possible protective and risk factors for mental ill-health among female youths at risk (...)
  44. Evaluating approaches for reducing catastrophic risks from AI. Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic – or even existentialrisks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from AI. The quality of (...)
  45. The benefits and risks of nostalgia: analysis of a fictional case with special reference to ethical and existential issues. Emmanuel Bäckryd - 2023 - Philosophy, Ethics and Humanities in Medicine 18 (1):1-7.
    Background In a previous paper in Philos Ethics Humanit Med, the 1937 Swedish novel Sömnlös (Swedish for sleepless) by Vilhelm Moberg was used as background for a thought experiment, in which last century’s progresses concerning the safety of sleeping pills were projected into the future. This gave rise to a theoretical discussion about broad medico-philosophical questions such as (among other things) the concept of pharmaceuticalisation. Methods In this follow-up paper, the theme of insomnia in Sömnlös is complemented by a discussion (...)
  46. Respect for others’ risk attitudes and the long-run future. Andreas Mogensen - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued (...)
  47. The Myth of “Just” Nuclear Deterrence: Time for a New Strategy to Protect Humanity from Existential Nuclear Risk. Joan Rohlfing - 2023 - Ethics and International Affairs 37 (1):39-49.
    Nuclear weapons are different from every other type of weapons technology. Their awesome destructive potential and the unparalleled consequences of their use oblige us to think critically about the ethics of nuclear possession, planning, and use. Joe Nye has given the ethics of nuclear weapons deep consideration. He posits that we have a basic moral obligation to future generations to preserve roughly equal access to important values, including equal chances of survival, and proposes criteria for achieving conditional or “just deterrence” (...)
  48. Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, (...)
    1 citation
  49. Agential Risks: A Comprehensive Introduction. Phil Torres - 2016 - Journal of Evolution and Technology 26 (2):31-47.
    The greatest existential threats to humanity stem from increasingly powerful advanced technologies. Yet the “risk potential” of such tools can only be realized when coupled with a suitable agent who, through error or terror, could use the tool to bring about an existential catastrophe. While the existential risk literature has provided many accounts of how advanced technologies might be misused and abused to cause unprecedented harm, no scholar has yet explored the other half of the agent-tool coupling, (...)
    5 citations
  50. Editorial: Risks of artificial intelligence. Vincent C. Müller - 2015 - In Risks of general intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
    1 citation
1 — 50 / 999