Results for 'human extinction, existential risk, population growth, global catastrophic risks'

999 found
  1. Existential risks: a philosophical analysis. Phil Torres - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy 66 (4):614-639.
    This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing (...) risk studies as a legitimate field of scientific and philosophical inquiry. In making these arguments, the present paper hopes to provide a modicum of clarity to foundational issues relating to the central concept of arguably the most important discussion of our times.
  2. Existential Risk, Climate Change, and Nonideal Justice. Alex McLaughlin - 2024 - The Monist 107 (2):190-206.
    Climate change is often described as an existential risk to the human species, but this terminology has generally been avoided in the climate-justice literature in analytic philosophy. I investigate the source of this disconnect and explore the prospects for incorporating the idea of climate change as an existential risk into debates about climate justice. The concept of existential risk does not feature prominently in these discussions, I suggest, because assumptions that structure ‘ideal’ accounts of climate justice (...)
  3. Existential risks: analyzing human extinction scenarios and related hazards. Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics (...)
  4. Aquatic refuges for surviving a global catastrophe. Alexey Turchin & Brian Green - 2017 - Futures 89:26-37.
    Recently many methods for reducing the risk of human extinction have been suggested, including building refuges underground and in space. Here we will discuss the perspective of using military nuclear submarines or their derivatives to ensure the survival of a small portion of humanity who will be able to rebuild human civilization after a large catastrophe. We will show that it is a very cost-effective way to build refuges, and viable solutions exist for various budgets and timeframes. Nuclear (...)
  5. Existential risk pessimism and the time of perils. David Thorstad - manuscript
  6. Existential Risks: Exploring a Robust Risk Reduction Strategy. Karim Jebari - 2015 - Science and Engineering Ethics 21 (3):541-554.
    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with (...)
  7. Existential risks. Nick Bostrom - manuscript
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics (...)
  8. Human Extinction from a Thomist Perspective. Stefan Riedener - 2021 - In Stefan Riedener, Dominic Roser & Markus Huppenbauer (eds.), Effective Altruism and Religion: Synergies, Tensions, Dialogue. Baden-Baden, Germany: Nomos. pp. 187-210.
    “Existential risks” are risks that threaten the destruction of humanity’s long-term potential: risks of nuclear wars, pandemics, supervolcano eruptions, and so on. On standard utilitarianism, it seems, the reduction of such risks should be a key global priority today. Many effective altruists agree with this verdict. But how should the importance of these risks be assessed on a Christian moral theory? In this paper, I begin to answer this question – taking Thomas Aquinas (...)
  9. Global Catastrophic Risks. Nick Bostrom & Milan M. Ćirković (eds.) - 2008 - Oxford University Press.
    A Global Catastrophic Risk is one that has the potential to inflict serious damage to human well-being on a global scale. This book focuses on such risks arising from natural catastrophes, nuclear war, terrorism, biological weapons, totalitarianism, advanced nanotechnology, artificial intelligence and social collapse.
  10. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global (...) failure could happen at various levels of AI development, namely, before it starts self-improvement, during its takeoff, when it uses various instruments to escape its initial confinement, or after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.
  11. Global Catastrophic Risk and the Drivers of Scientist Attitudes Towards Policy. Christopher Nathan & Keith Hyams - 2022 - Science and Engineering Ethics 28 (6):1-18.
    An anthropogenic global catastrophic risk is a human-induced risk that threatens sustained and wide-scale loss of life and damage to civilisation across the globe. In order to understand how new research on governance mechanisms for emerging technologies might assuage such risks, it is important to ask how perceptions, beliefs, and attitudes towards the governance of global catastrophic risk within the research community shape the conduct of potentially risky research. The aim of this study is (...)
  12. Confronting Existential Risks With Voluntary Moral Bioenhancement. Vojin Rakić & Milan M. Ćirković - 2016 - Journal of Evolution and Technology 26 (2):48-59.
    We outline an argument favoring voluntary moral bioenhancement as a response to existential risks humanity exposes itself to. We consider this type of enhancement a solution to the antithesis between the extinction of humanity and the imperative of humanity to survive at any cost. By opting for voluntary moral bioenhancement, we refrain from advocating illiberal or even totalitarian strategies that would allegedly help humanity preserve itself. We argue that such strategies, by encroaching upon the freedom of individuals, already (...)
  13. For the Good of the Globe: Moral Reasons for States to Mitigate Global Catastrophic Biological Risks. Tess F. Johnson - forthcoming - Journal of Bioethical Inquiry:1-12.
    Actions to prepare for and prevent pandemics are a common topic for bioethical analysis. However, little attention has been paid to global catastrophic biological risks more broadly, including pandemics with artificial origins, the creation of agents for biological warfare, and harmful outcomes of human genome editing. What’s more, international policy discussions often focus on economic arguments for state action, ignoring a key potential set of reasons for states to mitigate global catastrophic biological risks: (...)
  14. Respect for others’ risk attitudes and the long-run future. Andreas Mogensen - manuscript
    When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with (...)
  15. Discounting, Buck-Passing, and Existential Risk Mitigation: The Case of Space Colonization. Joseph Gottlieb - forthcoming - Space Policy.
    Large-scale, self-sufficient space colonization is a plausible means of efficiently reducing existential risks and ensuring our long-term survival. But humanity is by and large myopic, and as an intergenerational global public good, existential risk reduction is systematically undervalued, hampered by intergenerational discounting. This paper explores how these issues apply to space colonization, arguing that the motivational and psychological barriers to space colonization are a special—and especially strong—case of a more general problem. The upshot is not that (...)
  16. The Risks to Human Evolution Posed by World Population Growth, Environmental and Ecosystem Pollution and the COVID-19 Pandemic. Marta Toraldo, Luana Conte & Domenico Maurizio Toraldo - 2021 - Philosophy Study 11 (3).
  17. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI (...)
  18. Anthropic shadow: observation selection effects and human extinction risks. Milan M. Ćirković, Anders Sandberg & Nick Bostrom - unknown
    We describe a significant practical consequence of taking anthropic biases into account in deriving predictions for rare stochastic catastrophic events. The risks associated with catastrophes such as asteroidal/cometary impacts, supervolcanic episodes, and explosions of supernovae/gamma-ray bursts are based on their observed frequencies. As a result, the frequencies of catastrophes that destroy or are otherwise incompatible with the existence of observers are systematically underestimated. We describe the consequences of the anthropic bias for estimation of catastrophic risks, and (...)
  19. The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk. David Manheim - forthcoming - Futures.
    The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility (...)
  20. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then (...)
  21. On theory X and what matters most. Simon Beard & Patrick Kaczmarek - 2022 - In Jeff McMahan, Tim Campbell, James Goodrich & Ketan Ramakrishnan (eds.), Ethics and Existence: The Legacy of Derek Parfit. Oxford: Oxford University Press. pp. 358-386.
    One of Derek Parfit’s greatest legacies was the search for Theory X, a theory of population ethics that avoided all the implausible conclusions and paradoxes that have dogged the field since its inception: the Absurd Conclusion, the Repugnant Conclusion, the Non-Identity Problem, and the Mere Addition Paradox. In recent years, it has been argued that this search is doomed to failure and no satisfactory population axiology is possible. This chapter reviews Parfit’s life’s work in the field and argues (...)
  22. The Myth of “Just” Nuclear Deterrence: Time for a New Strategy to Protect Humanity from Existential Nuclear Risk. Joan Rohlfing - 2023 - Ethics and International Affairs 37 (1):39-49.
    Nuclear weapons are different from every other type of weapons technology. Their awesome destructive potential and the unparalleled consequences of their use oblige us to think critically about the ethics of nuclear possession, planning, and use. Joe Nye has given the ethics of nuclear weapons deep consideration. He posits that we have a basic moral obligation to future generations to preserve roughly equal access to important values, including equal chances of survival, and proposes criteria for achieving conditional or “just deterrence” (...)
  23. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims (...)
  24. The Moral Case for Long-Term Thinking. Hilary Greaves, William MacAskill & Elliott Thornley - forthcoming - In Natalie Cargill & Tyler M. John (eds.), The Long View: Essays on Policy, Philanthropy, and the Long-Term Future. London: FIRST. pp. 19-28.
    This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend (...)
  25. Current cases of AI misalignment and their implications for future risks. Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models (...)
  26. Whose Survival? A Critical Engagement with the Notion of Existential Risk. Philip Højme - 2019 - Scientia et Fides 7 (2):63-76.
    This paper provides a critique of Bostrom’s concern with existential risks, a critique which relies on Adorno and Horkheimer’s interpretation of the Enlightenment. Their interpretation is used to elicit the inner contradictions of transhumanist thought and to show the invalid premises on which it is based. By first outlining Bostrom’s position this paper argues that transhumanism reverts to myth in its attempt to surpass the human condition. Bostrom’s argument is based on three pillars, Maxipok, Parfitian population (...)
  27. Should longtermists recommend hastening extinction rather than delaying it? Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected (...)
  28. Evaluating approaches for reducing catastrophic risks from AI. Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic, or even existential, risks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from (...)
  29. Population and Environment. Elizabeth Cripps - 2017 - In Stephen M. Gardiner & Allen Thompson (eds.), Oxford Handbook of Environmental Ethics. Oxford University Press.
    Human population growth, along with technological development and levels of consumption, is a key driver of our devastating impact on the environment. This must be acknowledged as a matter of urgency. Otherwise, we risk bequeathing future generations a tragic choice between introducing explicitly impermissible coercive population policies, becoming incapable of securing even basic human rights, and worsening climate change and other environmental damage. However, this chapter warns against approaching questions of population from too narrow an (...)
  30. The risk that humans will soon be extinct. John Leslie - 2010 - Philosophy 85 (4):447-463.
    If it survives for a little longer, the human race will probably start to spread across its galaxy. Germ warfare, though, or environmental collapse or many another factor might shortly drive humans to extinction. Are they likely to avoid it? Well, suppose they spread across the galaxy. Of all humans who would ever have been born, maybe only one in a hundred thousand would have lived as early as you. If, in contrast, humans soon became extinct then because of (...)
  31. Welcome to the Machine: AI, Existential Risk, and the Iron Cage of Modernity. Jay A. Gupta - 2023 - Telos: Critical Theory of the Contemporary 2023 (203):163-169.
    Recent advances in the functional power of artificial intelligence (AI) have prompted an urgent warning from industry leaders and researchers concerning its “profound risks to society and humanity.” Their open letter is admirable not only for its succinct identification of said risks, which include the mass dissemination of misinformation, loss of jobs, and even the possible extinction of our species, but also for its clear normative framing of the problem: “Should we let machines flood our information channels with (...)
  32. Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek (...)
  33. Climate Ethics and Population Policy. Philip Cafaro - 2012 - WIREs Climate Change 3 (1):45–61.
    According to the Intergovernmental Panel on Climate Change, human population growth is one of the two primary causes of increased greenhouse gas emissions and accelerating global climate change. Slowing or ending population growth could be a cost effective, environmentally advantageous means to mitigate climate change, providing important benefits to both human and natural communities. Yet population policy has attracted relatively little attention from ethicists, policy analysts, or policy makers dealing with this issue. In part, (...)
  34. Existential Risk Prevention as Global Priority. Nick Bostrom - 2013 - Global Policy 4 (1):15–31.
  35. Livelihood change, farming, and managing flood risk in the Lerma Valley, Mexico. Hallie Eakin & Kirsten Appendini - 2008 - Agriculture and Human Values 25 (4):555-566.
    In face of rising flood losses globally, the approach of “living with floods,” rather than relying on structural measures for flood control and prevention, is acquiring greater resonance in diverse socioeconomic contexts. In the Lerma Valley in the state of Mexico, rapid industrialization, population growth, and the declining value of agricultural products are driving livelihood and land use change, exposing increasing numbers of people to flooding. However, data collected in two case studies of farm communities affected by flooding in (...)
  36. The Precipice: Existential Risk and the Future of Humanity. Toby Ord - 2020 - London: Bloomsbury.
    Humanity stands at a precipice. Our species could survive for millions of generations — enough time to end disease, poverty, and injustice; to reach new heights of flourishing. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, gaining the power to destroy ourselves, without the wisdom to ensure we won’t. Since then, these dangers have only multiplied, from climate change to engineered pandemics and unaligned artificial intelligence. If we do not (...)
  37. Radical Existentialist Exercise. Jasper Doomen - 2021 - Voices in Bioethics 7.
    The problem of climate change raises some important philosophical, existential questions. I propose a radical solution designed to provoke reflection on the role of humans in climate change. To push the theoretical limits of what measures people are willing to accept to combat it, an extreme population control tool is proposed: allowing people to reproduce only if they make a financial commitment guaranteeing a carbon-neutral upbringing. Solving the problem of climate change (...)
  38. Explosive Population Growth in Tropical Africa: Crucial Omission in Development Forecasts—Emerging Risks and Way Out. Julia Zinkina & Andrey Korotayev - 2014 - World Futures 70 (2):120-139.
  39. The End of the World: The Science and Ethics of Human Extinction. John Leslie - 1996 - Routledge.
    Are we in imminent danger of extinction? Yes, we probably are, argues John Leslie in his chilling account of the dangers facing the human race as we approach the second millennium. The End of the World is a sobering assessment of the many disasters that scientists have predicted and speculated on as leading to apocalypse. In the first comprehensive survey, potential catastrophes - ranging from deadly diseases to high-energy physics experiments - are explored to help us understand the (...). One of the greatest threats facing humankind, however, is the insurmountable fact that we are a relatively young species, a risk which is at the heart of the 'Doomsday Argument'. This argument, if correct, makes the dangers we face more serious than we could have ever imagined. This more than anything makes the arrogance and ignorance of politicians, and indeed philosophers, so disturbing as they continue to ignore the manifest dangers facing future generations.
  40. Non-Additive Axiologies in Large Worlds. Christian J. Tarsney & Teruji Thomas - 2020
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a (...)
  41. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they (...)
  42. Human population growth: Local dynamics-global effects. Frank Dochy - 1995 - Acta Biotheoretica 43 (3):241-247.
    This communication presents a very simple model for the global growth of the human population. It is shown that the solution of the simple equation.
  43. Expanding the Duty to Rescue to Climate Migration. David N. Hoffman, Anne Zimmerman, Camille Castelyn & Srajana Kaikini - 2022 - Voices in Bioethics 8.
    Since 2008, an average of twenty million people per year have been displaced by weather events. Climate migration creates a special setting for a duty to rescue. A duty to rescue is a moral rather than legal duty and imposes on a bystander to take an active role in preventing serious harm to someone else. This paper analyzes the idea of expanding a duty to rescue to climate migration. We address who should have (...)
    No categories
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  44. Human enhancement and technological uncertainty: Essays on the promise and peril of emerging technology. Karim Jebari - 2014 - Dissertation, Royal Institute of Technology, Stockholm
    Essay I explores brain-machine interface technologies, which make direct communication between the brain and a machine possible by means of electrical stimuli. This essay reviews the existing and emerging technologies in this field and offers an inquiry into the ethical problems that are likely to emerge. Essay II, co-written with Professor Sven-Ove Hansson, presents a novel procedure to engage the public in deliberations on the potential impacts of technology. This procedure, the convergence seminar, is a form of scenario-based discussion that (...)
    No categories
    Direct download  
     
    Export citation  
     
    Bookmark  
  45. Reducing the Risk of Human Extinction. Jason G. Matheny - unknown
    In this century a number of events could extinguish humanity. The probability of these events may be very low, but the expected value of preventing them could be high, as it represents the value of all future human lives. We review the challenges to studying human extinction risks and, by way of example, estimate the cost effectiveness of preventing extinction-level asteroid impacts.
     
    Export citation  
     
    Bookmark   15 citations  
  46. Human nature and the feasibility of inclusivist moral progress. Andrés Segovia-Cuéllar - 2022 - Dissertation, Ludwig Maximilians Universität, München
    The study of social, ethical, and political issues from a naturalistic perspective has been pervasive in the social sciences and the humanities in recent decades. This articulation of empirical research with philosophical and normative reflection is increasingly getting attention in academic circles and the public sphere, given the prevalence of urgent needs and challenges that society is facing on a global scale. The contemporary world is full of challenges, or what some philosophers have called ‘existential risks’ to (...)
    No categories
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  47. Political ecology: global and local. Roger Keil (ed.) - 1998 - New York: Routledge.
    This collection is drawn from a recent Global Political conference held to mark the centenary of the birth of Harold Innis, Canada's most important political economist. Throughout his life, Innis was concerned with topics which remain central to political ecology today, such as the link between culture and nature, the impact of humanity on the environment, and the role of technology and communications. In this volume, the contributors address environmental issues which Innis was concerned with, from a contemporary, political (...)
    Direct download  
     
    Export citation  
     
    Bookmark   2 citations  
  48. COVID-19 Pandemic as an Indicator of Existential Evolutionary Risk of Anthropocene (Anthropological Origin and Global Political Mechanisms). Valentin Cheshko & Nina Konnova - 2021 - In MOChashin O. Kristal (ed.), Bioethics: from theory to practice. pp. 29-44.
    The coronavirus pandemic, like its predecessors (AIDS, Ebola, etc.), is evidence of the evolutionary instability of the socio-cultural and ecological niche created by mankind, which has been the main factor in the evolutionary success of our biological species and of the civilization it created. At least, this applies to the modern global civilization, which is called technogenic or technological, although it exists in several varieties. As we hope to show, the current crisis has roots that are no less ontological than epistemological; (...)
    Direct download  
     
    Export citation  
     
    Bookmark  
  49. The Philosophy of Inquiry and Global Problems: The Intellectual Revolution Needed to Create a Better World. Nicholas Maxwell - 2024 - London: Palgrave-Macmillan.
    Bad philosophy is responsible for the climate and nature crises, and for other global problems that threaten our future. That sounds mad, but it is true. A philosophy of science, or of theatre, or of life, is a view about what are, or ought to be, the aims and methods of science, theatre, or life. It is in this entirely legitimate sense of “philosophy” that bad philosophy is responsible for the crises we face. First, and in a blatantly obvious way, (...)
    Direct download (5 more)  
     
    Export citation  
     
    Bookmark  
  50. Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp. Markéta Poledníková - 2020 - Pro-Fil 21 (1):91.
    Book review: Toby Ord, The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020, 480 pp.
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
1 — 50 / 999