About this topic
Summary The technological singularity, or the intelligence explosion, is a hypothesized event that will follow the creation of machines whose intelligence is greater than that of humans.  The hypothesis is that such machines will be better than humans at designing machines, so that even more intelligent machines will follow, with a rapid spiral to superintelligence.
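A minimal way to make this spiral precise, offered only as an illustrative sketch (the notation I_n and \delta is purely expository and not drawn from any particular work listed below): suppose generation n of machines has intelligence I_n, and any system at level I_n can design a successor that is more intelligent by at least a fixed margin \delta > 0. Then

    I_{n+1} \ge I_n + \delta, \qquad \text{hence} \qquad I_n \ge I_0 + n\delta \to \infty,

so intelligence grows without bound unless some step is blocked by a structural limit, diminishing returns, or a motivational defeater (the kinds of defeaters discussed in Chalmers 2010).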
Key works The idea of an intelligence explosion is introduced in Good 1965.  The term "singularity" is introduced by Vinge 1993.  Philosophical analyses are given by Bostrom 1998 and Chalmers 2010.
Introductions Chalmers 2010
  1. Should Machines Be Tools or Tool-Users? Clarifying Motivations and Assumptions in the Quest for Superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting (...)
  2. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade “paperclip maximizer” that it is in (...)
  3. Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition.Gopal P. Sarma - forthcoming - Arxiv Preprint Arxiv:1704.00783.
    I make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent AI systems.
  4. Magical Thinking: The Intersection of Quantum Entanglement and Self-Referential Recursion.Ilexa Yardley - 2021 - Https://Medium.Com/the-Circular-Theory/.
    The superposition of magical thinking, quantum entanglement, and self-referential recursion explains the relationship between human and machine intelligence (universal intelligence).
  5. Measuring the Intelligence of an Idealized Mechanical Knowing Agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes (...)
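    A rough way to render the definition sketched in this abstract (the notation is illustrative and may differ from the paper's): writing |n| for the computable ordinal coded by a natural number n, whenever n is such a code, the intelligence level of a knowing agent A would be

        \mathrm{Int}(A) \;=\; \sup\,\{\, |n| \;:\; A \text{ knows that } n \text{ codes a computable ordinal} \,\},

    an ordinal-valued supremum rather than one of the real-number scales mentioned above.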
  6. 类人猿或安卓会毁灭地球吗?*雷·库兹韦尔(2012年)关于如何创造心灵的评论 (Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012)) (2019年修订版).Michael Richard Starks - 2020 - In 欢迎来到地球上的地狱: 婴儿,气候变化,比特币,卡特尔,中国,民主,多样性,养成基因,平等,黑客,人权,伊斯兰教,自由主义,繁荣,网络,混乱。饥饿,疾病,暴力,人工智能,战争. Las Vegas, NV USA: Reality Press. pp. 146-158.
    Some years ago I reached the point where I could usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes would be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical nonsense about what these facts mean. The clear distinctions Wittgenstein described some 80 years ago between scientific questions and the various language games that describe them are seldom considered, so one is alternately wowed by the science and dismayed by the incoherence of the analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a rational logical structure and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific issues of fact and philosophical issues about how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless about this. He is enchanted by models, theories and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are just ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (AI's most famous critic) likes to say, clear conditions of satisfaction (COS)). I have attempted to make a start on this in my recent writings. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019) and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).
  7. Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019).Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical nonsense about what (...)
  8. Transhumanism as a New Social Movement.Fabio Tollon - 2020 - Metapsychology Online Reviews.
    In his engaging book, James MacFarlane details the emergence of Technological Human Enhancement Advocacy (THEA) and provides a detailed ethnographic account of this phenomenon. Specifically, he aims to outline how transhumanism, as a specific offshoot of THEA, has “come to represent an enduring set of techno-optimistic ideas surrounding the future of humanity, with its advocates seeking to transcend limits of the body and mind according to an unwavering Enlightenment-derived faith in science, reason and individual freedom” (pg. 3).
  9. Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement.Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. Oxford University Press. pp. 307-325.
  10. ¿Los hominoides o androides destruirán la tierra? — Una revisión de ‘Cómo Crear una Mente’ (How to Create a Mind) por Ray Kurzweil (2012) (revisión revisada 2019).Michael Richard Starks - 2019 - In Delirios Utópicos Suicidas en el Siglo 21 La filosofía, la naturaleza humana y el colapso de la civilización Artículos y reseñas 2006-2019 4a Edición. Las Vegas, NV USA: Reality Press. pp. 250-262.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific questions of (...)
  11. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and decision-making. (...)
  12. How Philosophy of Mind Can Shape the Future.Susan Schneider & Pete Mandik - 2018 - In Amy Kind (ed.), Philosophy of Mind in the Twentieth and Twenty-first Centuries. New York, NY, USA: pp. 303-319.
  13. How Feasible is the Rapid Development of Artificial Superintelligence?Kaj Sotala - 2017 - Physica Scripta 11 (92).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
  14. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of comparable severity and probability to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  15. Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2017 - In Suicidal Utopian Delusions in the 21st Century 4th ed (2019). Henderson, NV USA: Michael Starks. pp. 675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  16. New Developments in the Philosophy of AI.Vincent Müller - 2016 - In Fundamental Issues of Artificial Intelligence. Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
  17. Risks of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
  18. Editorial: Risks of Artificial Intelligence.Vincent C. Müller - 2016 - In Risks of artificial intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  19. Future Progress in Artificial Intelligence: A Survey of Expert Opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent Müller (ed.), Fundamental Issues of Artificial Intelligence. Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
  20. Responses to Catastrophic AGI Risk: A Survey.Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the fieldʼs proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors and proposals for creating AGIs that are safe due to (...)
  21. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies: Oxford University Press, Oxford, 2014, xvi+328, £18.99, ISBN: 978-0-19-967811-2. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  22. Risks of Artificial General Intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence (Vincent C. Müller, pages 297-301); Autonomous technology and the greater human good (Steve Omohundro, pages 303-315); The errors, insights and lessons of famous AI predictions – and what they mean for the future (Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342); (...)
  23. Editorial: Risks of General Artificial Intelligence.Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
  24. Future Progress in Artificial Intelligence: A Poll Among Experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
  25. Pervasion of What? Techno–Human Ecologies and Their Ubiquitous Spirits.Mark Coeckelbergh - 2013 - AI and Society 28 (1):55-63.
    Are the robots coming? Is the singularity near? Will we be dominated by technology? The usual response to ethical issues raised by pervasive and ubiquitous technologies assumes a philosophical anthropology centered on existential autonomy and agency, a dualistic ontology separating humans from technology and the natural from the artificial, and a post-monotheistic dualist and creational spirituality. This paper explores an alternative, less modern vision of the “technological” future based on different assumptions: a “deep relational” view of human being and self, (...)
  26. Philosophy and Theory of Artificial Intelligence.Vincent C. Müller (ed.) - 2013 - Springer.
    [Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp. ] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This (...)
  27. Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  28. Design and the Singularity: The Philosophers Stone of AI?Igor Aleksander - 2012 - Journal of Consciousness Studies 19 (7-8):7-8.
    Much discussion on the singularity is based on the assumption that the design ability of a human can be transferred into an AI system, then rendered autonomous and self-improving. I argue here that this cannot be foreseen from the current state of the art of automatic or evolutionary design. Assuming that this will happen 'some day' is a doubtful step and may be in the class of 'searching for the Philosopher's Stone'.
  29. Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  30. Introduction to JCS Singularity Edition.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
    This is the editor's introduction to the double 2012 JCS edition on the Singularity.
  31. Introduction to Singularity Edition of JCS.Uziel Awret - 2012 - Journal of Consciousness Studies 19 (1-2):7-15.
    This special interactive interdisciplinary issue of JCS on the singularity and the future relationship of humanity and AI is the first of two issues centered on David Chalmers’ 2010 JCS article ‘The Singularity: A Philosophical Analysis’. These issues include more than 20 solicited commentaries to which Chalmers responds. To quote Chalmers: "One might think that the singularity would be of great interest to academic philosophers, cognitive scientists, and artificial intelligence researchers. In practice, this has not been the case. Good (...)
  32. Belief in the Singularity is Logically Brittle.Selmer Bringsjord - 2012 - Journal of Consciousness Studies 19 (7-8):14.
  33. Terrible Angels.Damien Broderick - 2012 - Journal of Consciousness Studies 19 (1-2):20-41.
  34. The Singularity: A Reply to Commentators.David J. Chalmers - 2012 - Journal of Consciousness Studies (7-8):141-167.
    I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second. A singularity (or an intelligence explosion) is a rapid (...)
  35. The Singularity and the Rapture: Transhumanist and Popular Christian Views of the Future.Ronald Cole-Turner - 2012 - Zygon 47 (4):777-796.
    Religious views of the future often include detailed expectations of profound changes to nature and humanity. Popular American evangelical Christianity, especially writers like Hal Lindsey, Rick Warren, or Rob Bell, offer extended accounts that provide insight into the views of the future held by many people. In the case of Lindsey, detailed descriptions of future events are provided, along with the claim that forecasted events will occur within a generation. These views are summarized and compared to the secular idea of (...)
  36. The Mystery of David Chalmers.Daniel Dennett - 2012 - Journal of Consciousness Studies 19 (1-2):1-2.
  37. A History of First Step Fallacies.Hubert L. Dreyfus - 2012 - Minds and Machines 22 (2):87-99.
    In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and by so doing AI researchers condemned their enterprise to failure. About the same time, a logician, Yehoshua Bar-Hillel, pointed out that AI optimism was based on what he called the “first step fallacy”. First step thinking has the idea of a successful last step built in. Limited early success, however, is not a (...)
  38. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood?Ben Goertzel - 2012 - Journal of Consciousness Studies 19 (1-2):96.
    Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters' i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall Singularity eternally, or to delay the Singularity until humanity more (...)
  39. The Singularity: Commentary on David Chalmers.Susan Greenfield - 2012 - Journal of Consciousness Studies 19 (1-2):1-2.
    The concept of a 'Singularity' is particularly intriguing as it draws not just on philosophical but also neuroscientific issues. As a neuroscientist, perhaps my best contribution here, therefore, would be to provide some reality checks against the elegant and challenging philosophical arguments set out by Chalmers. A convenient framework for addressing the points he raises will be to give my personal scientific take on the three basic questions summarised in the Conclusions section.
  40. Memo From the Singularity Summit.Wendy M. Grossman - 2012 - The Philosophers' Magazine 56 (56):127-128.
  41. A Brain in a Vat Cannot Break Out: Why the Singularity Must Be Extended, Embedded and Embodied.Francis Heylighen & Center Leo Apostel Ecco - 2012 - Journal of Consciousness Studies 19 (1-2):126-142.
    The present paper criticizes Chalmers's discussion of the Singularity, viewed as the emergence of a superhuman intelligence via the self-amplifying development of artificial intelligence. The situated and embodied view of cognition rejects the notion that intelligence could arise in a closed 'brain-in-a-vat' system, because intelligence is rooted in a high-bandwidth, sensory-motor interaction with the outside world. Instead, it is proposed that superhuman intelligence can emerge only in a distributed fashion, in the form of a self-organizing network of humans, computers, and (...)
  42. Can Intelligence Explode?Marcus Hutter - 2012 - Journal of Consciousness Studies 19 (1-2):143-166.
    The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to (...)
  43. Science Versus Philosophy in the Singularity.Ray Kurzweil - 2012 - Journal of Consciousness Studies 19 (7-8):7-8.
  44. A Response To The Singularity.Pamela Mccorduck - 2012 - Journal of Consciousness Studies 19 (7-8):54-56.
  45. Response to The Singularity by David Chalmers.Drew McDermott - 2012 - Journal of Consciousness Studies 19 (1-2):1-2.
  46. Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
  47. Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
  48. More Splodge Than Singularity?Chris Nunn - 2012 - Journal of Consciousness Studies 19 (7-8):7-8.
  49. The Singularity Wager A Response to David Chalmers.Arkady Plotnitsky - 2012 - Journal of Consciousness Studies 19 (7-8):7-8.
  50. Singularity and Inevitable Doom.Jesse Prinz - 2012 - Journal of Consciousness Studies 19 (7-8):7-8.
    Chalmers has articulated a compellingly simple argument for inevitability of the singularity—an explosion of increasingly intelligent machines, eventuating in super forms of intelligence. Chalmers then goes on to explore the implications of this outcome, and suggests ways in which we might prepare for the eventuality. I think Chalmers' argument proves both too much and too little. If the reasoning were right, it would follow inductively that the singularity already exists, in which case Chalmers would have proven more than he set (...)