Results for 'superintelligence'

145 found
  1. Is superintelligence necessarily moral?Leonard Dung - forthcoming - Analysis.
    Numerous authors have expressed concern that advanced artificial intelligence (AI) poses an existential risk to humanity. These authors argue that we might build AI which is vastly intellectually superior to humans (a ‘superintelligence’), and which optimizes for goals that strike us as morally bad, or even irrational. Thus, this argument assumes that a superintelligence might have morally bad goals. However, according to some views, a superintelligence necessarily has morally adequate goals. This might be the case either because (...)
  2. Superintelligence: paths, dangers, strategies.Nick Bostrom (ed.) - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate (...)
  3. Superintelligence as superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that (...)
    6 citations
  4. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW]Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having (...)
    38 citations
  5. Artificial superintelligence and its limits: why AlphaZero cannot become a general agent.Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
    2 citations
  6. The problem of superintelligence: political, not technological.Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem (...)
    6 citations
  7. Are superintelligent robots entitled to human rights?John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  8. Superintelligence: Fears, Promises and Potentials.Ben Goertzel - 2015 - Journal of Evolution and Technology 25 (2):55-87.
    Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute; and (...)
    4 citations
  9. Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History.Phil Torres - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press.
    This chapter argues that dual-use emerging technologies are distributing unprecedented offensive capabilities to nonstate actors. To counteract this trend, some scholars have proposed that states become a little “less liberal” by implementing large-scale surveillance policies to monitor the actions of citizens. This is problematic, though, because the distribution of offensive capabilities is also undermining states’ capacity to enforce the rule of law. I will suggest that the only plausible escape from this conundrum, at least from our present vantage point, is (...)
    2 citations
  10. Superintelligence: Paths, Dangers, Strategies vol. 1.Nick Bostrom - 2014 - Oxford University Press; 1st edition.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate (...)
  11. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge (...)
  12. Superintelligence and Singularity.Ray Kurzweil - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 201--24.
  13. How long before superintelligence?Nick Bostrom - 1998 - International Journal of Futures Studies 2.
    This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; (...)
    34 citations
  14. Superintelligence and Singularity.Ray Kurzweil - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 146–170.
    The Singularity is a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. This chapter argues that within several decades information-based technologies will encompass all human knowledge and proficiency, ultimately including the pattern-recognition powers, problem-solving skills, and emotional and moral intelligence of the human brain itself. The Singularity will allow us to transcend these limitations of our biological bodies and brains. Most long-range forecasts of what (...)
  15. Superintelligent AI and Skepticism.Joseph Corabi - 2017 - Journal of Evolution and Technology 27 (1):4-23.
    It has become fashionable to worry about the development of superintelligent AI that results in the destruction of humanity. This worry is not without merit, but it may be overstated. This paper explores some previously undiscussed reasons to be optimistic that, even if superintelligent AI does arise, it will not destroy us. These have to do with the possibility that a superintelligent AI will become mired in skeptical worries that its superintelligence cannot help it to solve. I argue that (...)
  16. Friendly Superintelligent AI: All You Need Is Love.Michael Prinzing - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer.
    There is a non-trivial chance that sometime in the future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become “superintelligent”, vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure—long before one arrives—that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because (...)
  17. Superintelligence and Singularity.Ray Kurzweil - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 60--201.
  18. The Singularity, Superintelligent Machines, and Mind Uploading: The Technological Future?Antonio Diéguez & Pablo García-Barranquero - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 237-255.
    This chapter discusses the question of whether we will ever have an Artificial General Superintelligence (AGSI) and how its arrival would affect our species. First, it explores various proposed definitions of AGSI and the potential implications of its emergence, including the possibility of collaboration or conflict with humans, its impact on our daily lives, and its potential for increased creativity and wisdom. The concept of the Singularity, which refers to the hypothetical future emergence of superintelligent machines (...)
  19. The Attribute of Superintelligence (Fatānah) of the Prophets in terms of Using Reason Properly.Mustafa Sönmez - 2022 - Kader 20 (2):723-744.
    Reason and revelation are two important guides for humanity. People can find the right path only thanks to these two guides. Fatānah is the peak of functional intelligence, which is a special attribute given to prophets. At the same time, this attribute is the ability and competence to use reason in the best way. Needless to say, this attribute comes along with great values and privileges. This prophetic attribute not only reflects the human aspect of the prophets but also points (...)
  20. Will Superintelligence Lead to Spiritual Enhancement?Ted Peters - 2022 - Religions 13 (5):399.
    If we human beings are successful at enhancing our intelligence through technology, will this count as spiritual advance? No. Intelligence alone, whether what we are born with or what is superseded by artificial intelligence or intelligence amplification, has no built-in moral compass. Christian spirituality values love more highly than intelligence, because love orients us toward God, toward the welfare of the neighbor, and toward the common good. Spiritual advance would require orienting our enhanced intelligence toward loving God and neighbor with (...)
  21. Superintelligence: Paths, Dangers, Strategies.Tim Mulgan - forthcoming - Philosophical Quarterly:pqv034.
  22. The revelation of superintelligence.Konrad Szocik, Bartłomiej Tkacz & Patryk Gulczyński - 2020 - AI and Society 35 (3):755-758.
    The idea of superintelligence is a source of mainly philosophical and ethical considerations. Those considerations are rooted in the idea that an entity which is more intelligent than humans may evolve at some point in the future. For obvious reasons, the superintelligence is considered a kind of existential threat for humanity. In this essay, we discuss two ideas. One of them is the putative nature of future superintelligence, which does not necessarily need to be harmful for (...)
  23. The Control Problem. Excerpts from Superintelligence: Paths, Dangers, Strategies.Nick Bostrom - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 308–330.
    This chapter analyzes the control problem, the unique principal‐agent problem that arises with the creation of an artificial superintelligent agent. It distinguishes two broad classes of potential methods for addressing this problem, capability control and motivation selection, and examines several specific techniques within each class. It also alludes to the esoteric possibility of “anthropic capture”. Capability control methods seek to prevent undesirable outcomes by limiting what the superintelligence can do. This might involve boxing methods or incentive methods, stunting method. (...)
  24. Don't Worry about Superintelligence.Nicholas Agar - 2016 - Journal of Evolution and Technology 26 (1):73-82.
    This paper responds to Nick Bostrom’s suggestion that the threat of a human-unfriendly superintelligence should lead us to delay or rethink progress in AI. I allow that progress in AI presents problems that we are currently unable to solve. However, we should distinguish between currently unsolved problems for which there are rational expectations of solutions and currently unsolved problems for which no such expectation is appropriate. The problem of a human-unfriendly superintelligence belongs to the first category. It is rational to (...)
    1 citation
  25. Ethical guidelines for a superintelligence.Ernest Davis - 2015 - Artificial Intelligence 220:121-124.
  26. Answering Divine Love: Human Distinctiveness in the Light of Islam and Artificial Superintelligence.Yusuf Çelik - 2023 - Sophia 62 (4):679-696.
    In the Qur’an, human distinctiveness was first questioned by angels. These established denizens of the cosmos could not understand why God would create a seemingly pernicious human when immaculate devotees of God such as themselves existed. In other words, the angels asked the age-old question: what makes humans so special and different? Fast forward to our present age and this question is made relevant again in light of the encroaching arrival of an artificial superintelligence (ASI). Up to this point (...)
    1 citation
  27. Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence.Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to war for global domination, i.e., ASI-enabled warfare, or because the ASI wars on behalf of itself to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war or even war on each other, in part due to the 1945 UN Charter, which states that Member States should (...)
  28. On quantum computing for artificial superintelligence.Anna Grabowska & Artur Gunia - 2024 - European Journal for Philosophy of Science 14 (2):1-30.
    Artificial intelligence algorithms, fueled by continuous technological development and increased computing power, have proven effective across a variety of tasks. Concurrently, quantum computers have shown promise in solving problems beyond the reach of classical computers. These advancements have contributed to a misconception that quantum computers enable hypercomputation, sparking speculation about quantum supremacy leading to an intelligence explosion and the creation of superintelligent agents. We challenge this notion, arguing that current evidence does not support the idea that quantum technologies enable hypercomputation. (...)
  29. What overarching ethical principle should a superintelligent AI follow?Atle Ottesen Søvik - 2022 - AI and Society 37 (4):1505-1518.
    What is the best overarching ethical principle to give a possible future superintelligent machine, given that we do not know what the best ethics are today or in the future? Eliezer Yudkowsky has suggested that a superintelligent AI should have as its goal to carry out the coherent extrapolated volition of humanity (CEV), the most coherent way of combining human goals. The article discusses some problems with this proposal and some alternatives suggested by Nick Bostrom. A slightly different proposal is (...)
  30. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  31. Superintelligence: Paths, Dangers, Strategies. By Nick Bostrom. Oxford University Press, Oxford, 2014, pp. xvi+328. Hardcover: $29.95/£18.99. ISBN: 9780199678112. [REVIEW]Sheldon Richmond - 2016 - Philosophy 91 (1):125-130.
    1 citation
  32. What Is Empathetic Superintelligence?David Pearce - unknown
    Our current conception of intelligence as measured by IQ tests is “mind-blind”. IQ tests lack ecological validity because they ignore social cognition – the “mindreading” prowess that enabled one species of social primate to become the most cognitively successful on the planet. In this talk, I shall examine how to correct the ethnocentric and anthropocentric biases of our perspective-taking abilities. What future technologies can enrich our capacity to understand other minds? I shall also discuss obstacles to building empathetic AGI (artificial (...)
  33. Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly.Pim Haselager & Giulio Mecacci - 2020 - American Journal of Bioethics Neuroscience 11 (2):113-119.
  34. Science Fiction and Philosophy: From Time Travel to Superintelligence.Susan Schneider (ed.) - 2009 - Wiley-Blackwell.
    A timely volume that uses science fiction as a springboard to meaningful philosophical discussions, especially at points of contact between science fiction and new scientific developments. Raises questions and examines timely themes concerning the nature of the mind, time travel, artificial intelligence, neural enhancement, free will, the nature of persons, transhumanism, virtual reality, and neuroethics. Draws on a broad range of books, films and television series, including The Matrix, Star Trek, Blade Runner, Frankenstein, Brave New World, The Time Machine, and (...)
    6 citations
  35. Risk management standards and the active management of malicious intent in artificial superintelligence.Patrick Bradley - 2020 - AI and Society 35 (2):319-328.
    The likely near-future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and the integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for (...)
  36. Proclus, henads and archai in the superintelligible world.C. D'Ancona - 1992 - Rivista di Storia Della Filosofia 47 (2):265-294.
  37. In Algorithms We Trust: Magical Thinking, Superintelligent AI and Quantum Computing.Nathan Schradle - 2020 - Zygon 55 (3):733-747.
    This article analyzes current attitudes toward artificial intelligence (AI) and quantum computing and argues that they represent a modern‐day form of magical thinking. It proposes that AI and quantum computing are thus excellent examples of the ways that traditional distinctions between religion, science, and magic fail to account for the vibrancy and energy that surround modern technologies.
    1 citation
  38. Appearance in this list neither guarantees nor precludes a future review of the book. Abdoullaev, Azamat, Artificial Superintelligence, Moscow, Russia, EIS Encyclopedic Intelligent Systems, Ltd., 1999, pp. 184. Adams, Robert Merrihew, Finite and Infinite Goods, Oxford, UK, Oxford University Press, 1999, pp. 410, £35.00. [REVIEW]Theodor Adorno & Walter Benjamin - 1999 - Mind 108:432.
  39. The ethics of artificial intelligence: superintelligence, life 3.0 and robot rights.Kati Tusinski Berg - 2018 - Journal of Media Ethics 33 (3):151-153.
  40. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
    4 citations
  41. Everything and More: The Prospects of Whole Brain Emulation.Eric Mandelbaum - 2022 - Journal of Philosophy 119 (8):444-459.
    Whole Brain Emulation has been championed as the most promising, well-defined route to achieving both human-level artificial intelligence and superintelligence. It has even been touted as a viable route to achieving immortality through brain uploading. WBE is not a fringe theory: the doctrine of Computationalism in philosophy of mind lends credence to the in-principle feasibility of the idea, and the standing of the Human Connectome Project makes it appear to be feasible in practice. Computationalism is a popular, independently plausible (...)
    6 citations
  42. Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
    18 citations
  43. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
    5 citations
  44. Future progress in artificial intelligence: A survey of expert opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    39 citations
  45. The Singularity.David J. Chalmers - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 171–224.
    This chapter provides a rich philosophical discussion of superintelligence, a widely discussed piece that has encouraged philosophers of mind to take transhumanism, mind uploading, and the singularity more seriously. It starts with the argument for a singularity: is there good reason to believe that there will be an intelligence explosion? Next, the chapter considers how to negotiate the singularity: if it is possible that there will be a singularity, how can we maximize the chances of a good outcome? Finally, (...)
    3 citations
  46. Machines learning values.Steve Petersen - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical (...)
    2 citations
  47. Why Friendly AIs won’t be that Friendly: A Friendly Reply to Muehlhauser and Bostrom.Robert James M. Boyles & Jeremiah Joven Joaquin - 2020 - AI and Society 35 (2):505–507.
    In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that for our species to survive the impending rise of superintelligent AIs, we need to ensure that they would be human-friendly. This discussion note offers a more natural but bleaker outlook: that in the end, if these AIs do arise, they won’t be that friendly.
    4 citations
  48. Editorial: Risks of general artificial intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
    3 citations
  49. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  50. Alien Minds.Susan Schneider - 2009 - In Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 225–242.
    This chapter first explains why it is likely that the alien civilizations we encounter will be forms of superintelligent artificial intelligence (SAI). Next, it turns to the question of whether superintelligent aliens can be conscious – whether it feels a certain way to be an alien, despite their non‐biological nature. The chapter draws from the literature in philosophy of AI, and urges that although we cannot be certain that superintelligent aliens can be conscious, it is likely that they would be. (...)
1 — 50 / 145