About this topic
Summary: The technological singularity, or intelligence explosion, is a hypothesized event that would follow the creation of machines whose intelligence is greater than that of humans.  The hypothesis is that such machines would be better than humans at designing machines, so that still more intelligent machines would follow, in a rapid spiral to superintelligence.
Key works: The idea of an intelligence explosion is introduced in Good 1965.  The term "singularity" is introduced by Vinge 1993.  Philosophical analyses are given by Bostrom 1998 and Chalmers 2010.
Introductions: Chalmers 2010
1 — 50 / 56
  1. George Ainslie, Ryan T. McKay & Daniel C. Dennett (2009). Non-Instrumental Belief is Largely Founded on Singularity. Behavioral and Brain Sciences 32 (6):511.
    The radical evolutionary step that divides human decision-making from that of nonhumans is the ability to excite the reward process for its own sake, in imagination. Combined with hyperbolic over-valuation of the present, this ability is a potential threat both to the individual's long-term survival and to the natural selection of high intelligence. Human belief is intrinsically under-founded, which may or may not be adaptive.
  2. Igor Aleksander (2012). Design and the Singularity: The Philosopher's Stone of AI? Journal of Consciousness Studies 19 (7-8):7-8.
    Much discussion on the singularity is based on the assumption that the design ability of a human can be transferred into an AI system, then rendered autonomous and self-improving. I argue here that this cannot be foreseen from the current state of the art of automatic or evolutionary design. Assuming that this will happen 'some day' is a doubtful step and may be in the class of 'searching for the Philosopher's Stone'.
  3. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  4. Uziel Awret (2012). Introduction to Singularity Edition of JCS. Journal of Consciousness Studies 19 (1-2):7-15.
  5. Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence.
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  6. Nick Bostrom, When Machines Outsmart Humans.
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
  7. Nick Bostrom (2003). Taking Intelligent Machines Seriously: Reply to Critics. Futures 35 (8):901-906.
    In an earlier paper in this journal [1], I sought to defend the claims that (1) substantial probability should be assigned to the hypothesis that machines will outsmart humans within 50 years, (2) such an event would have immense ramifications for many important areas of human concern, and that consequently (3) serious attention should be given to this scenario. Here, I will address a number of points made by several commentators.
  8. Nick Bostrom (1998). How Long Before Superintelligence? International Journal of Futures Studies 2.
    This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; (...)
  9. Selmer Bringsjord (2012). Belief in the Singularity is Logically Brittle. Journal of Consciousness Studies 19 (7):14.
  10. Damien Broderick (2012). Terrible Angels. Journal of Consciousness Studies 19 (1-2):20-41.
  11. David J. Chalmers (2012). The Singularity: A Reply to Commentators. Journal of Consciousness Studies 19 (7-8):141-167.
    I would like to thank the authors of the 26 contributions to this symposium on my article “The Singularity: A Philosophical Analysis”. I learned a great deal from reading their commentaries. Some of the commentaries engaged my article in detail, while others developed ideas about the singularity in other directions. In this reply I will concentrate mainly on those in the first group, with occasional comments on those in the second. A singularity (or an intelligence explosion) is a rapid (...)
  12. David J. Chalmers (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  13. Mark Coeckelbergh (2013). Pervasion of What? Techno–Human Ecologies and Their Ubiquitous Spirits. AI and Society 28 (1):55-63.
    Are the robots coming? Is the singularity near? Will we be dominated by technology? The usual response to ethical issues raised by pervasive and ubiquitous technologies assumes a philosophical anthropology centered on existential autonomy and agency, a dualistic ontology separating humans from technology and the natural from the artificial, and a post-monotheistic dualist and creational spirituality. This paper explores an alternative, less modern vision of the “technological” future based on different assumptions: a “deep relational” view of human being and self, (...)
  14. Ronald Cole-Turner (2012). The Singularity and the Rapture: Transhumanist and Popular Christian Views of the Future. Zygon 47 (4):777-796.
    Religious views of the future often include detailed expectations of profound changes to nature and humanity. Popular American evangelical Christianity, especially in writers like Hal Lindsey, Rick Warren, or Rob Bell, offers extended accounts that provide insight into the views of the future held by many people. In the case of Lindsey, detailed descriptions of future events are provided, along with the claim that forecasted events will occur within a generation. These views are summarized and compared to the secular idea of (...)
  15. Daniel Dennett (2012). The Mystery of David Chalmers. Journal of Consciousness Studies 19 (1-2):1-2.
  16. Eric Dietrich (2007). After the Humans Are Gone. Philosophy Now 61 (May/June):16-19.
    Recently, on the History Channel, artificial intelligence (AI) was singled out, with much wringing of hands, as one of the seven possible causes of the end of human life on Earth. I argue that the wringing of hands is quite inappropriate: the best thing that could happen to humans, and the rest of life on planet Earth, would be for us to develop intelligent machines and then usher in our own extinction.
  17. Robert M. Geraci (2010). The Popular Appeal of Apocalyptic Ai. Zygon 45 (4):1003-1020.
    The belief that computers will soon become transcendently intelligent and that human beings will “upload” their minds into machines has become ubiquitous in public discussions of robotics and artificial intelligence in Western cultures. Such beliefs are the result of pervasive Judaeo-Christian apocalyptic beliefs, and they have rapidly spread through modern pop and technological culture, including such varied and influential sources as Rolling Stone, the IEEE Spectrum, and official United States government reports. They have gained sufficient credibility to enable the construction (...)
  18. Ben Goertzel (2012). Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Journal of Consciousness Studies 19 (1):96.
    Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity eternally, or to delay the Singularity until humanity more (...)
  19. I. J. Good (1965). Speculations Concerning the First Ultraintelligent Machine. In F. Alt & M. Rubinoff (eds.), Advances in Computers, Volume 6. Academic Press.
  20. Susan Greenfield (2012). The Singularity: Commentary on David Chalmers. Journal of Consciousness Studies 19 (1-2):1-2.
    The concept of a 'Singularity' is particularly intriguing as it draws not just on philosophical but also neuroscientific issues. As a neuroscientist, perhaps my best contribution here, therefore, would be to provide some reality checks against the elegant and challenging philosophical arguments set out by Chalmers. A convenient framework for addressing the points he raises will be to give my personal scientific take on the three basic questions summarised in the Conclusions section.
  21. Wendy M. Grossman (2012). Memo From the Singularity Summit. The Philosophers' Magazine 56 (56):127-128.
  22. John Storrs Hall (2007). Self-Improving AI: An Analysis. [REVIEW] Minds and Machines 17 (3):249-259.
    Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have (...)
  23. Robin Hanson, Is a Singularity Just Around the Corner?
    Economic growth is determined by the supply and demand of investment capital; technology determines the demand for capital, while human nature determines the supply. The supply curve has two distinct parts, giving the world economy two distinct modes. In the familiar slow growth mode, rates of return are limited by human discount rates. In the fast growth mode, investment is limited by the world's wealth. Historical trends suggest that we may transition to the fast mode in roughly another century and (...)
  24. Francis Heylighen (2012). A Brain in a Vat Cannot Break Out: Why the Singularity Must Be Extended, Embedded and Embodied. Journal of Consciousness Studies 19 (1-2):126-142.
    The present paper criticizes Chalmers's discussion of the Singularity, viewed as the emergence of a superhuman intelligence via the self-amplifying development of artificial intelligence. The situated and embodied view of cognition rejects the notion that intelligence could arise in a closed 'brain-in-a-vat' system, because intelligence is rooted in a high-bandwidth, sensory-motor interaction with the outside world. Instead, it is proposed that superhuman intelligence can emerge only in a distributed fashion, in the form of a self-organizing network of humans, computers, and (...)
  25. Marcus Hutter (2012). Can Intelligence Explode? Journal of Consciousness Studies 19 (1-2):143-166.
    The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to (...)
  26. Ray Kurzweil (2012). Science Versus Philosophy in the Singularity. Journal of Consciousness Studies 19 (7-8):7-8.
  27. Ray Kurzweil (2009). Superintelligence and Singularity. In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell 201--24.
  28. Pamela Mccorduck (2012). A Response To The Singularity. Journal of Consciousness Studies 19 (7-8):54-56.
  29. Drew McDermott (2012). Response to The Singularity by David Chalmers. Journal of Consciousness Studies 19 (1-2):1-2.
  30. Vincent C. Müller (2016). New Developments in the Philosophy of AI. In Fundamental Issues of Artificial Intelligence. Springer
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
  31. Vincent C. Müller (ed.) (2016). Risks of Artificial Intelligence. CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
  32. Vincent C. Müller (2016). Editorial: Risks of Artificial Intelligence. In Risks of artificial intelligence. CRC Press - Chapman & Hall 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  33. Vincent C. Müller (ed.) (2014). Risks of Artificial General Intelligence. Taylor & Francis (JETAI).
    Special issue "Risks of artificial general intelligence", Journal of Experimental and Theoretical Artificial Intelligence 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: "Risks of general artificial intelligence", Vincent C. Müller, pp. 297-301; "Autonomous technology and the greater human good", Steve Omohundro, pp. 303-315; "The errors, insights and lessons of famous AI predictions – and what they mean for the future", Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pp. 317-342; (...)
  34. Vincent C. Müller (2014). Editorial: Risks of General Artificial Intelligence. Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can (...)
  35. Vincent C. Müller (ed.) (2013). Philosophy and Theory of Artificial Intelligence. Springer.
    Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set (...)
  36. Vincent C. Müller (2012). Introduction: Philosophy and Theory of Artificial Intelligence. Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, (...)
  37. Vincent C. Müller (ed.) (2012). Theory and Philosophy of AI (Minds and Machines, 22/2 - Special Volume). Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
  38. Vincent C. Müller & Nick Bostrom (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In Fundamental Issues of Artificial Intelligence. Springer 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
  39. Vincent C. Müller & Nick Bostrom (2014). Future Progress in Artificial Intelligence: A Poll Among Experts. AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
  40. Chris Nunn (2012). More Splodge Than Singularity? Journal of Consciousness Studies 19 (7-8):7-8.
  41. Arkady Plotnitsky (2012). The Singularity Wager A Response to David Chalmers. Journal of Consciousness Studies 19 (7-8):7-8.
  42. Jesse Prinz (2012). Singularity and Inevitable Doom. Journal of Consciousness Studies 19 (7-8):7-8.
    Chalmers has articulated a compellingly simple argument for the inevitability of the singularity—an explosion of increasingly intelligent machines, eventuating in super forms of intelligence. Chalmers then goes on to explore the implications of this outcome, and suggests ways in which we might prepare for the eventuality. I think Chalmers' argument proves both too much and too little. If the reasoning were right, it would follow inductively that the singularity already exists, in which case Chalmers would have proven more than he set (...)
  43. David Roden (2012). The Disconnection Thesis. In A. Eden, J. H. Søraker, E. Steinhart & A. H. Moore (eds.), The Singularity Hypothesis: A Scientific and Philosophical Assessment. Springer
    In his 1993 article ‘The Coming Technological Singularity: How to survive in the posthuman era’ the computer scientist Vernor Vinge speculated that developments in artificial intelligence might reach a point where improvements in machine intelligence result in smart AIs producing ever-smarter AIs. According to Vinge the ‘singularity’, as he called this threshold of recursive self-improvement, would be a ‘transcendental event’ transforming life on Earth in ways that unaugmented humans are not equipped to envisage. In this paper I argue Vinge’s idea (...)
  45. Eric Steinhart (2012). The Singularity Beyond Philosophy of Mind. Journal of Consciousness Studies 19 (7-8):7-8.
    Thought about the singularity intersects the philosophy of mind in deep and important ways. However, thought about the singularity also intersects many other areas of philosophy, including the history of philosophy, metaphysics, the philosophy of science, and the philosophy of religion. I point to some of those intersections. Singularitarian thought suggests that many of the objects and processes that once lay in the domain of revealed religion now lie in the domain of pure computer science.
  46. Paul D. Thorn (2015). Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. [REVIEW] Minds and Machines 25 (3):285-289.
  47. Frank Tipler (2012). Inevitable Existence and Inevitable Goodness of the Singularity. Journal of Consciousness Studies 19 (1-2):1-2.
    I show that the known fundamental laws of physics--quantum mechanics, general relativity, and the particle physics Standard Model--imply that the Singularity will inevitably come to pass. Further, I show that there is an ethical system built into science and rationality itself--thus the value-fact distinction is nonsense--and this will preclude the AIs from destroying humanity even if they wished to do so. Finally, I show that the coming Singularity is good because only if it occurs (...)
  48. Vernor Vinge (1993). The Coming Technological Singularity. Whole Earth Review.
  49. Vernor Vinge (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era. Whole Earth Review.
  50. James Williams (2011). Book: The Singularity Is Near, by Ray Kurzweil. Philosophy Now 86:43.