
Ethical robots: the future can heed us

  • Original Article, AI & SOCIETY

Abstract

Bill Joy’s deep pessimism is now famous. “Why the Future Doesn’t Need Us,” his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of Joy’s reasoning. On the other hand, we ought to fear a good deal more than fear itself: we ought to fear not robots, but what some of us may do with robots.

Notes

  1. The paper originally appeared in Wired (Joy 2000) and is available online: http://www.wired.com/wired/archive/8.04/joy.html. I quote from the online version and therefore give no page numbers; each quotation is instantly findable by searching that version.

  2. The presentation can be found, without videos, at http://www.kryten.mm.rpi.edu/PRES/CAPOSU0805/sb_robotsfreedom.pdf. Those able to view Keynote files, in which the videos of PERI in action are embedded, can go to http://www.kryten.mm.rpi.edu/PRES/CAPOSU0805/sb_robotsfreedom.key.tar.gz. A full account of PERI and his exploits, which until recently had nothing to do with autonomy (PERI has been built to match human intelligence in various domains; see e.g., Bringsjord and Schimanski 2003, 2004), can be found at http://www.cogsci.rpi.edu/research/rair/pai.

  3. This is as good a place as any to point out that, as the parentheticals associated with a number of the propositions on the list just given indicate, by the lights of some computationalists we are not pure software, but are embodied creatures. Joy and Moravec (and Hillis) assume that human persons are in the end software that can be attached to this or that body. That seems like a pretty big assumption.

  4. The most recent one appeared in Theoretical Computer Science (Bringsjord and Arkoudas 2004). For a formal list that was up-to-date as of 2003, and reached back to my What Robots Can and Cannot Be (1992), see my Superminds (2003).

  5. However, it does occur to me that it would perhaps be nice if a new argument against computationalism could be introduced in the present paper. Accordingly, here is one such argument, one that happens to be in line with the themes we are reflecting upon herein:

     Argument No. 4

     1. If computationalism is true, then concerted, global efforts undertaken by the world’s best relevant scientists and engineers to build computing machines with the cognitive power of human beings will succeed after n years of effort—the “clock” having been started in 1950.

     2. Concerted, global efforts undertaken by the world’s best relevant scientists and engineers to build computing machines with the cognitive power of human beings have not succeeded after n years of effort (the clock again having started in 1950).

     3. Therefore, computationalism is false.

     Obviously, my case against Joy hinges not a bit on this argument. But I do think this argument should give pause to today’s computationalists. I have made it a point to ask a number of such strong “believers” how many years of failure would suffice to throw the truth of computationalism into doubt in their minds—and have never received back a number. But clearly, there must exist some n for which Argument No. 4 becomes sound. It seems to me that 50 is large enough, especially given that we have not produced a machine able to converse at the level of a sharp toddler.
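
     Argument No. 4 has the form of modus tollens, so its validity (as distinct from the truth of its premises, which turns on the choice of n) can be checked mechanically. Here is a minimal sketch in Lean 4; the names C, S, and argument_no_4 are mine, not the paper's: C abbreviates "computationalism is true" and S n abbreviates "the global effort succeeds after n years of effort".

```lean
-- Argument No. 4 as modus tollens.
-- C and S are illustrative placeholders, not from the paper.
variable (C : Prop)        -- "computationalism is true"
variable (S : Nat → Prop)  -- "the global effort succeeds after n years"

theorem argument_no_4 (n : Nat)
    (p1 : C → S n)  -- premise 1: if computationalism is true, success after n years
    (p2 : ¬ S n)    -- premise 2: no success after n years
    : ¬ C :=        -- conclusion: computationalism is false
  fun hC => p2 (p1 hC)
```

     The proof term records only that the conclusion follows from the premises; all the philosophical work, as the note itself concedes, lies in defending premise 1 and in fixing n.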

  6. Let me point out here that it is entirely possible to do some first-rate thinking predicated on the supposition that human-level robots will eventually arrive. At the conference where I presented the keynote lecture centered around an ancestor of the present paper, such thinking was carried out by Torrance (2005) and Moor (2005).

  7. Any kind of reassurance would require that what it feels like to be me be reduced to some kind of third-person specification—which many have said is impossible. I have alluded above to the fact that today’s smartest machines cannot verbally out-duel a sharp toddler. At least we do have computers that can understand some language, and we continue to press on; but we are really and truly nowhere in the attempt to understand consciousness in machine terms.

  8. Of course, some philosophers (e.g., Parfit 1986) have championed views of personal identity that seem to entail the real possibility of such downloading. But this is quite beside the point on the table, which is whether you would, in my thought-experiment, take the plunge. It is easy enough to describe thought-experiments in which even conservative folks would take the plunge. For example, if you knew that you were going to die in one hour, because an atom bomb is going to be detonated directly below your feet, you might well, out of desperation, give the downloading a shot. But this is a different thought-experiment.

References

  • Arkoudas K, Bringsjord S (2005) Toward ethical robots via mechanized deontic logic. In: Technical report—machine ethics: papers from the AAAI fall symposium; FS–05–06, American Association for Artificial Intelligence, Menlo Park, CA, pp 24–29

  • Barr A (1983) Artificial intelligence: cognition as computation. In: Machlup F (ed) The study of information: interdisciplinary messages. Wiley-Interscience, New York, NY, pp 237–262

  • Boolos GS, Jeffrey RC (1989) Computability and logic. Cambridge University Press, Cambridge, UK

  • Bringsjord S (1992) What robots can and can’t be. Kluwer, Dordrecht, The Netherlands

  • Bringsjord S (2000) A contrarian future for minds and machines. Chronicle of Higher Education, p B5. Reprinted in The Education Digest 66(6):31–33

  • Bringsjord S, Arkoudas K (2004) The modal argument for hypercomputing minds. Theor Comput Sci 317:167–190

  • Bringsjord S, Schimanski B (2003) What is artificial intelligence? Psychometric AI as an answer. In: Proceedings of the 18th international joint conference on artificial intelligence (IJCAI–03), San Francisco, CA, pp 887–893

  • Bringsjord S, Schimanski B (2004) Pulling it all together via psychometric AI. In: Proceedings of the 2004 fall symposium: achieving human-level intelligence through integrated systems and research, Menlo Park, CA, pp 9–16

  • Bringsjord S, Zenzen M (2003) Superminds: people harness hypercomputation, and more. Kluwer Academic, Dordrecht, The Netherlands

  • Dietrich E (1990) Computationalism. Soc Epistemology 4(2):135–154

  • Fetzer J (1994) Mental algorithms: are minds computational systems? Pragmatics Cogn 2(1):1–29

  • Harnad S (1991) Other bodies, other minds: a machine incarnation of an old philosophical problem. Minds Mach 1(1):43–54

  • Haugeland J (1985) Artificial intelligence: the very idea. MIT Press, Cambridge, MA

  • Hofstadter D (1985) Waking up from the Boolean dream. In: Metamagical themas: questing for the essence of mind and pattern. Bantam, New York, NY, pp 631–665

  • Johnson-Laird P (1988) The computer and the mind. Harvard University Press, Cambridge, MA

  • Joy W (2000) Why the future doesn’t need us. Wired 8(4)

  • Kurzweil R (2000) The age of spiritual machines: when computers exceed human intelligence. Penguin USA, New York, NY

  • McCarthy J (2000) Free will–even for robots. J Exp Theor Artif Intell 12(3):341–352

  • Moor J (2005) The nature and importance of machine ethics. In: Technical report—machine ethics: papers from the AAAI fall symposium; FS–05–06, American Association for Artificial Intelligence, Menlo Park, CA

  • Moravec H (1999) Robot: mere machine to transcendent mind. Oxford University Press, Oxford, UK

  • von Neumann J (1966) Theory of self-reproducing automata. University of Illinois Press, Urbana, IL

  • Newell A (1980) Physical symbol systems. Cogn Sci 4:135–183

  • Parfit D (1986) Reasons and persons. Oxford University Press, Oxford, UK

  • Peters RS (ed) (1962) Body, man, and citizen: selections from Hobbes’ writing. Collier, New York, NY

  • Searle J (1980) Minds, brains, and programs. Behav and Brain Sci 3:417–424

  • Simon H (1980) Cognitive science: the newest science of the artificial. Cogn Sci 4:33–56

  • Simon H (1981) Study of human intelligence by creating artificial intelligence. Am Sci 69(3):300–309

  • Torrance S (2005) A robust view of machine ethics. In: Technical report—machine ethics: papers from the AAAI fall symposium; FS–05–06, American Association for Artificial Intelligence, Menlo Park, CA

  • Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460

Acknowledgements

Thanks are due to Steve Torrance, Michael Anderson, and Jim Moor, for comments and suggestions offered after the keynote presentation that was based on an ancestor of this paper (at AAAI’s 2005 fall symposium on machine ethics). Thanks are also due to Konstantine Arkoudas, Paul Bello, and Yingrui Yang for discussions related to the issues treated herein. Special thanks are due to Bettina Schimanski for her robotics work on PERI, and for helping to concretize my widening investigation of robot free will by tinkering with real robots. Finally, I am grateful to two anonymous referees for comments and suggestions.

Author information

Corresponding author

Correspondence to Selmer Bringsjord.

Appendix

The full quote of the Unabomber’s fallacious argument, which appears also in Joy’s piece:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we cannot make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People would not be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand, it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purpose-less that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

Cite this article

Bringsjord, S. Ethical robots: the future can heed us. AI & Soc 22, 539–550 (2008). https://doi.org/10.1007/s00146-007-0090-9
