This is the first work to apply complex systems science to the psychological interplay of order and chaos. The author draws on thought from a wide range of disciplines, both conventional and unorthodox, to address such questions as the nature of consciousness, the relation between mind and reality, and the justification of belief systems. The material should provoke thought among systems scientists, theoretical psychologists, artificial intelligence researchers, and philosophers.
“Only a small community has concentrated on general intelligence. No one has tried to make a thinking machine... The bottom line is that we really haven’t progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack.... We have got to get back to the deepest questions of AI and general intelligence...” – Marvin Minsky, as interviewed in Hal’s Legacy, edited by David Stork, 2000. Our goal in creating this edited volume has been to fill an apparent gap in the scientific literature, by providing a coherent presentation of a body of contemporary research that, in spite of its integral importance, has hitherto kept a very low profile within the scientific and intellectual community. This body of work has not been given a name before; in this book we christen it “Artificial General Intelligence”. What distinguishes AGI work from run-of-the-mill “artificial intelligence” research is that it is explicitly focused on engineering general intelligence in the short term. We have been active researchers in the AGI field for many years, and it has been a pleasure to gather together papers from our colleagues working on related ideas from their own perspectives. In the Introduction we give a conceptual overview of the AGI field, and also summarize and interrelate the key ideas of the papers in the subsequent chapters.
Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity eternally, or to delay the Singularity until humanity more fully understands how to execute a Singularity in a positive way. It is suggested that as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology and synthetic biology.
While it seems unlikely that any method of guaranteeing human-friendliness on the part of advanced Artificial General Intelligence systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed, and then nine techniques for biasing AGIs in favor of Friendliness are presented:

1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner, not later.

In conclusion, and related to the final point, we advise the serious co-evolution of functional AGI systems and AGI-related ethical theory as soon as possible, before we have so much technical infrastructure that parties relatively unconcerned with ethics are able to rush ahead with brute-force approaches to AGI development.
Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute, and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute. Relevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions. Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning: they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals like tiling the universe with paperclips. Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment that is also ongoingly self-organizing, in only partially knowable ways. It is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst case scenarios” for advanced AI development are extremely dire are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex and uncertain, just as it has always been.
Two theses are proposed regarding the future evolution of the value systems of advanced AGI systems. The Value Learning Thesis is a semi-formalized version of the idea that, if an AGI system is taught human values in an interactive and experiential way as its intelligence increases toward human level, it will likely adopt these human values in a genuine way. The Value Evolution Thesis is a semi-formalized version of the idea that if an AGI system begins with human-like values, and then iteratively modifies itself, it will end up in roughly the same future states as a population of human beings engaged with progressively increasing their own intelligence. Taken together, these theses suggest a worldview in which raising young AGIs to have human-like values is a sensible thing to do, and likely to produce a future that is generally desirable in a human sense. While these two theses are far from definitively proven, I argue that they are more solid and more relevant to the actual future of AGI than Bostrom’s “Instrumental Convergence Thesis” and “Orthogonality Thesis”, which are core to the basis of his argument for fearing ongoing AGI development and placing AGI R&D under strict governmental control. In the context of fleshing out this argument, previous publications and discussions by Richard Loosemore and Kaj Sotala are discussed in some detail.
What will be the next huge leap in humanity's progress? We cannot know for sure, but I am reasonably confident that it will involve the radical extension of technology into the domain of thought. Ray Kurzweil (2000, 2005) has eloquently summarized the arguments in favor of this position.
This chapter discusses nine ways to bias open-source artificial general intelligence (AGI) toward Friendliness. There is no way to guarantee that advanced AGI, once created and released into the world, will behave according to human ethical standards; the primary objective of the chapter is therefore to suggest some potential ways of biasing the odds in that direction. First it discusses engineering the capability to acquire integrated ethical knowledge, and providing rich ethical interaction and instruction that respects developmental stages. It then considers developing stable, hierarchical goal systems, and ensuring that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement. It further proposes tightly linking AGI with the Global Brain, and fostering deep, consensus-building interactions and commensurability between divergent viewpoints. The chapter also recommends creating a mutually supportive community of AGIs, and encouraging measured co-advancement of AGI software and AGI ethics theory. Finally, it advocates developing advanced AGI sooner rather than later.
Leslie Kean’s Surviving Death is a wonderfully readable, carefully constructed summary of the evidence for the existence of what is colloquially called an “afterlife.” That is, she considers evidence for the hypothesis that individual human minds and personalities possess an existence going beyond their attachment to any particular body – so that, for instance, an individual with a certain name and certain traits may sometimes continue to perceive and act, even when the body typically associated with that individual is dead and gone. Most of the book comprises moderately detailed descriptions of specific cases, involving specific people, which indicate the existence of some sort of “afterlife” for individual human minds, or potentially give some information regarding the nature of this afterlife. Kean considers a gamut of phenomena such as past-life memories, near-death experiences, mediumistic trances, poltergeists, and so forth. However, she also makes a significant effort to draw general conclusions, lessons and hypotheses from the totality of these cases, while maintaining respect for the confusing and in many ways still mysterious nature of the phenomena under discussion. Each of the topics considered in the book has been reviewed and analyzed in more depth elsewhere. What Kean does, however, is provide a clear, evocative and rational survey of the many types of evidence that are directly relevant to the possibility and nature of an afterlife for individual human minds.
Ideas from random graph theory are used to give a heuristic argument that associative memory structure depends discontinuously on pattern recognition ability. This argument suggests that there may be a certain minimal size for intelligent systems.
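The abstract does not spell out the construction, but the standard example of the discontinuity it invokes is the Erdős–Rényi giant-component transition: as the average connectivity of a random graph passes a critical threshold, the largest connected cluster jumps from a negligible fraction of the nodes to a macroscopic one. The following sketch (hypothetical parameters n and avg_degree; an illustration of the general threshold phenomenon, not a reconstruction of the paper's argument) makes the jump visible numerically.

```python
import random
from collections import deque

def largest_component_fraction(n, avg_degree, seed=0):
    """Sample an Erdos-Renyi graph G(n, p) with p = avg_degree / n and
    return the fraction of nodes in its largest connected component."""
    rng = random.Random(seed)
    p = avg_degree / n
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for start in range(n):
        if seen[start]:
            continue
        # breadth-first search over the component containing `start`
        size, queue = 0, deque([start])
        seen[start] = True
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        best = max(best, size)
    return best / n

# Sweeping the average degree through 1.0 exposes the abrupt jump:
for c in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0):
    frac = largest_component_fraction(2000, c)
    print(f"avg degree {c:.1f}: largest component fraction {frac:.3f}")
```

Below the critical density the graph is dust; just above it, a single giant cluster appears. If large-scale associative memory structure requires such a cluster, this is the flavor of argument that yields a minimal size for intelligent systems.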
It has been proposed that natural selection occurs on a hierarchy of levels, of which the organismic level is neither the top nor the bottom. This hypothesis leads to the following practical problem: in general, how does one tell if a given phenomenon is a result of selection on level X or level Y? How does one tell what the units of selection actually are? It is convenient to assume that a unit of selection may be defined as a type of entity for which there exists, among all entities on the same level as that entity, an additive component of variance for some specific component F of fitness which does not appear as an additive component of variance in any decomposition of this F among entities at any lower level. But such a definition implicitly assumes that if f(x, y) depends nonadditively on its arguments, there must be interaction between the quantities which x and y represent. This assumption is incorrect. And one cannot avoid this error by speaking of transformability to additivity instead of merely additivity.
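The abstract gives no worked example, but the distinction it draws can be seen in a toy calculation (hypothetical fitness function and values, a sketch of the statistical point rather than the paper's own analysis): take f(x, y) = x·y and vary x and y completely independently, so that by construction there is no interaction between the quantities they represent. The ANOVA-style variance decomposition still reports a nonzero "interaction" component, purely as an artifact of the functional form.

```python
import itertools
import statistics

# Hypothetical fitness component F = f(x, y) = x * y: functionally
# nonadditive, although x and y are varied independently here, with
# no interaction between the quantities they represent.
xs, ys = [0, 1, 2], [0, 1, 2]
f = {(x, y): x * y for x, y in itertools.product(xs, ys)}

grand = statistics.mean(f.values())                            # overall mean
row = {x: statistics.mean(f[(x, y)] for y in ys) for x in xs}  # x main effects
col = {y: statistics.mean(f[(x, y)] for x in xs) for y in ys}  # y main effects

total = statistics.mean((v - grand) ** 2 for v in f.values())
additive = statistics.mean(
    ((row[x] - grand) + (col[y] - grand)) ** 2
    for x, y in itertools.product(xs, ys))
interaction = total - additive

print(f"total variance       {total:.3f}")        # 1.778
print(f"additive component   {additive:.3f}")     # 1.333
print(f"'interaction' term   {interaction:.3f}")  # 0.444, despite independence
```

The nonzero residual here is (x-1)(y-1) in disguise: it reflects the shape of f, not any interaction between whatever x and y stand for, which is exactly the conflation the definition above is charged with.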
Akin's determinism paradox involves a physical system that predicts its own behavior, and then spitefully defies it. Here this paradox is reformulated in purely computational language, in terms of virtual machines. The paradox is related to the theory of self-reproducing automata, and a mathematical conjecture is given which, if verified, would resolve the paradox.
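Concretely, the reformulation can be sketched as a diagonalization of the kind familiar from computability theory (an illustrative construction, not necessarily the one in the paper): hand a machine a predictor of its own output, and let the machine return the negation of whatever the predictor says. Any total predictor is then wrong about the machine built from it.

```python
def make_defier(predict):
    """Build a machine that consults a predictor of its own behavior
    and then spitefully does the opposite."""
    def machine():
        forecast = predict(machine)  # the predictor inspects the machine itself
        return not forecast          # ...and the machine defies the forecast
    return machine

# Any total predictor fails on the machine constructed from it:
def naive_predictor(m):
    return True  # claims every machine outputs True

defier = make_defier(naive_predictor)
print(defier())  # False -- the prediction is defied
```

The self-referential step, passing the machine to its own predictor, is the same move that drives the halting-problem proof, and presumably why the paradox connects to the theory of self-reproducing automata, where machines likewise operate on descriptions of themselves.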
0.0 Psychology versus Complex Systems Science. Over the last century, psychology has become much less of an art and much more of a science. Philosophical speculation is out; data collection is in. In many ways this has been a very positive trend. Cognitive science (Mandler, 1985) has given us scientific analyses of a variety of intelligent behaviors: short-term memory, language processing, vision processing, etc. And thanks to molecular psychology (Franklin, 1985), we now have a rudimentary understanding of the chemical processes underlying personality and mental illness. However, there is a growing feeling, particularly among non-psychologists (see e.g. Sommerhoff, 1990), that, with the new emphasis on data collection, something important has been lost. Very little attention is paid to the question of how it all fits together. The early psychologists, and the classical philosophers of mind, were concerned with the general nature of mentality as much as with the mechanisms underlying specific phenomena. But the new, scientific psychology has made disappointingly little progress toward the resolution of these more general questions. One way to deal with this complaint is to dismiss the questions themselves. After all, one might argue, a scientific psychology cannot be expected to deal with fuzzy philosophical questions that probably have little empirical significance. It is interesting that behaviorists and cognitive scientists tend to be in agreement regarding the question of the overall structure of the mind.