Results for 'superintelligence'

98 found
  1. Superintelligence: Paths, Dangers, Strategies. Nick Bostrom (ed.) - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate (...)
  2. Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2017 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge (...)
  3. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
    28 citations
  4. Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us, not through any malice, but simply because it will want our resources for its own purposes. In response I argue that (...)
    4 citations
  5. Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
    2 citations
  6. Are superintelligent robots entitled to human rights? John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  7. Superintelligence: Fears, Promises and Potentials. Ben Goertzel - 2015 - Journal of Evolution and Technology 25 (2):55-87.
    Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute; and (...)
    3 citations
  8. Superintelligence: Paths, Dangers, Strategies. By Nick Bostrom. Oxford University Press, Oxford, 2014, pp. xvi+328. Hardcover: $29.95/£18.99. ISBN: 9780199678112. [REVIEW] Sheldon Richmond - 2016 - Philosophy 91 (1):125-130.
    1 citation
  9. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History. Phil Torres - forthcoming - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)
    2 citations
  10. Superintelligence AI and Skepticism. Joseph Corabi - 2017 - Journal of Evolution and Technology 27 (1):4-23.
    It has become fashionable to worry about the development of superintelligent AI that results in the destruction of humanity. This worry is not without merit, but it may be overstated. This paper explores some previously undiscussed reasons to be optimistic that, even if superintelligent AI does arise, it will not destroy us. These have to do with the possibility that a superintelligent AI will become mired in skeptical worries that its superintelligence cannot help it to solve. I argue that (...)
  11. Superintelligence and Singularity. Ray Kurzweil - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 201-24.
  12. Don't Worry about Superintelligence. Nicholas Agar - 2016 - Journal of Evolution and Technology 26 (1):73-82.
    This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to (...)
    1 citation
  13. The problem of superintelligence: political, not technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem (...)
    5 citations
  14. The revelation of superintelligence. Konrad Szocik, Bartłomiej Tkacz & Patryk Gulczyński - 2020 - AI and Society 35 (3):755-758.
    The idea of superintelligence is a source of mainly philosophical and ethical considerations. Those considerations are rooted in the idea that an entity which is more intelligent than humans may evolve at some point in the future. For obvious reasons, superintelligence is considered a kind of existential threat to humanity. In this essay, we discuss two ideas. One of them is the putative nature of future superintelligence, which does not necessarily need to be harmful for (...)
  15. How long before superintelligence? Nick Bostrom - 1998 - International Journal of Futures Studies 2.
    This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; (...)
    31 citations
  16. Will Superintelligence Lead to Spiritual Enhancement? Ted Peters - 2022 - Religions 13 (5):399.
    If we human beings are successful at enhancing our intelligence through technology, will this count as spiritual advance? No. Intelligence alone, whether what we are born with or what is superseded by artificial intelligence or intelligence amplification, has no built-in moral compass. Christian spirituality values love more highly than intelligence, because love orients us toward God, toward the welfare of the neighbor, and toward the common good. Spiritual advance would require orienting our enhanced intelligence toward loving God and neighbor with (...)
  17. Superintelligence: Paths, Dangers, Strategies. Tim Mulgan - forthcoming - Philosophical Quarterly:pqv034.
  18. Superintelligence and Singularity. Ray Kurzweil - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 60-201.
  19. Ethical guidelines for a superintelligence. Ernest Davis - 2015 - Artificial Intelligence 220:121-124.
  20. Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly. Pim Haselager & Giulio Mecacci - 2020 - American Journal of Bioethics Neuroscience 11 (2):113-119.
  21. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW] Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  22. What overarching ethical principle should a superintelligent AI follow? Atle Ottesen Søvik - 2022 - AI and Society 37 (4):1505-1518.
    What is the best overarching ethical principle to give a possible future superintelligent machine, given that we do not know what the best ethics are today or in the future? Eliezer Yudkowsky has suggested that a superintelligent AI should have as its goal to carry out the coherent extrapolated volition of humanity (CEV), the most coherent way of combining human goals. The article discusses some problems with this proposal and some alternatives suggested by Nick Bostrom. A slightly different proposal is (...)
  23. What Is Empathetic Superintelligence? David Pearce - unknown
    Our current conception of intelligence as measured by IQ tests is “mind-blind”. IQ tests lack ecological validity because they ignore social cognition – the “mindreading” prowess that enabled one species of social primate to become the most cognitively successful on the planet. In this talk, I shall examine how to correct the ethnocentric and anthropocentric biases of our perspective-taking abilities. What future technologies can enrich our capacity to understand other minds? I shall also discuss obstacles to building empathetic AGI (artificial (...)
  24. Risk management standards and the active management of malicious intent in artificial superintelligence. Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for (...)
  25. Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - forthcoming - AI and Society:1-14.
    This article argues that an artificial superintelligence emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation–state to war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even war on each other, in part due to the 1945 UN Charter, which states Member States should “refrain (...)
  26. Appearance in this list neither guarantees nor precludes a future review of the book. Abdoullaev, Azamat, Artificial Superintelligence, Moscow, Russia, EIS Encyclopedic Intelligent Systems, Ltd., 1999, pp. 184. Adams, Robert Merrihew, Finite and Infinite Goods, Oxford, UK, Oxford University Press, 1999, pp. 410, £35.00. [REVIEW] Theodor Adorno & Walter Benjamin - 1999 - Mind 108:432.
  27. Science Fiction and Philosophy: From Time Travel to Superintelligence. Susan Schneider (ed.) - 2009 - Wiley-Blackwell.
    A timely volume that uses science fiction as a springboard to meaningful philosophical discussions, especially at points of contact between science fiction and new scientific developments. Raises questions and examines timely themes concerning the nature of the mind, time travel, artificial intelligence, neural enhancement, free will, the nature of persons, transhumanism, virtual reality, and neuroethics. Draws on a broad range of books, films and television series, including The Matrix, Star Trek, Blade Runner, Frankenstein, Brave New World, The Time Machine, and (...)
    6 citations
  28. In Algorithms We Trust: Magical Thinking, Superintelligent AI and Quantum Computing. Nathan Schradle - 2020 - Zygon 55 (3):733-747.
  29. Proclus, henads and archai in the superintelligible world. C. Dancona - 1992 - Rivista di Storia Della Filosofia 47 (2):265-294.
  30. The ethics of artificial intelligence: superintelligence, life 3.0 and robot rights. Kati Tusinski Berg - 2018 - Journal of Media Ethics 33 (3):151-153.
  31. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
    3 citations
  32. Everything and More: The Prospects of Whole Brain Emulation. Eric Mandelbaum - 2022 - Journal of Philosophy 119 (8):444-459.
    Whole Brain Emulation has been championed as the most promising, well-defined route to achieving both human-level artificial intelligence and superintelligence. It has even been touted as a viable route to achieving immortality through brain uploading. WBE is not a fringe theory: the doctrine of Computationalism in philosophy of mind lends credence to the in-principle feasibility of the idea, and the standing of the Human Connectome Project makes it appear to be feasible in practice. Computationalism is a popular, independently plausible (...)
    2 citations
  33. Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
    16 citations
  34. Future progress in artificial intelligence: A survey of expert opinion. Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    33 citations
  35. Why Friendly AIs won’t be that Friendly: A Friendly Reply to Muehlhauser and Bostrom. Robert James M. Boyles & Jeremiah Joven Joaquin - 2020 - AI and Society 35 (2):505–507.
    In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that for our species to survive the impending rise of superintelligent AIs, we need to ensure that they would be human-friendly. This discussion note offers a more natural but bleaker outlook: that in the end, if these AIs do arise, they won’t be that friendly.
    4 citations
  36. Fundamental Issues of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - Cham: Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  37. Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*: an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical (...)
    1 citation
  38. Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  39. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume). Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  40. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  41. Rebooting AI: Building Artificial Intelligence We Can Trust. Gary Marcus & Ernest Davis - 2019 - Vintage.
    Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the (...)
    9 citations
  42. Two arguments against human-friendly AI. Ken Daley - forthcoming - AI and Ethics.
    The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI) which is an as-of-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at (...)
  43. Singularitarianism and schizophrenia. Vassilis Galanos - 2017 - AI and Society 32 (4):573-590.
    Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson’s core theories of ecology of mind, schismogenesis, and double bind, are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics as to a twofold aim: the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and an explanatory analysis of the reasons bringing about such a polarized outcome (...)
    1 citation
  44. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward Zalta (ed.), Stanford Encyclopedia of Philosophy. Palo Alto, Cal.: CSLI, Stanford University. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    24 citations
  45. Why interdisciplinary research in AI is so important, according to Jurassic Park. Marie Oldfield - 2020 - The Tech Magazine 1 (1):1.
    “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” I think this quote resonates with us now more than ever, especially in the world of technological development. The writers of Jurassic Park were years ahead of their time with this powerful quote. As we build new technology, and we push on to see what can actually (...)
  46. The hard problem of AI rights. Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
    8 citations
  47. Immortal Life and Eternity. On the Transhumanist Project of Immortality. Emilio José Justo Domínguez - 2019 - Scientia et Fides 7 (2):233-246.
    Some transhumanist authors prophesy immortality thanks to the transfer of the human mind to a superintelligent computer that would guarantee the survival of the person. That immortality would mean a happy life. In this article we try to show that this supposed indefinite survival is not exactly what is usually understood by immortality. In addition, we try to think about what immortality is, based on the theological understanding of eternity and personal communion in which the life of (...)
  48. Leakproofing the Singularity. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
    4 citations
  49. AAAI: an Argument Against Artificial Intelligence. Sander Beckers - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
    2 citations
  50. Ethical issues in advanced artificial intelligence. Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is (...)
    26 citations
1 — 50 / 98