Automatic decision-making and reliability in robotic systems: some implications in the case of robot weapons

  • 25th Anniversary Volume “A Faustian Exchange: What is it to be human in the era of ubiquitous technology?”

Abstract

In this article, I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology that has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have received increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with those issues, but to show how the autonomy of such robotic systems, by which I mean the full automation of their decision processes, raises difficulties and paradoxes that are not easy to solve. This is especially so when the autonomy of these systems in their decision processes is considered alongside their reliability. Finally, I would like to show how difficult it is to respond to these difficulties and paradoxes by calling into play a strong formulation of the precautionary principle.

Notes

  1. A robot’s decision or choice processes are usually considered the main hallmark of its “intelligence”, a term that here refers to the ability of a machine to emulate cognitive abilities such as decision-making and learning, in the tradition of Artificial Intelligence (AI). In this article, I shall treat automatism and autonomy as strictly related terms, in accordance with one common use of these terms (see in the following). By contrast, automatic robotic systems are sometimes regarded as carrying out only fixed or preset operations (e.g. industrial robots, which are not “intelligent” in the aforementioned sense), and as such they are opposed to autonomous robotic systems, which are endowed with the above-mentioned cognitive abilities.
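
     A minimal sketch of this distinction follows (the function names, options and scoring rule are purely hypothetical, not taken from the article): a controller that only carries out preset operations is contrasted with one whose output results from a decision over sensed alternatives.

         from typing import Dict, List

         def automatic_controller(step: int) -> str:
             """Preset behaviour: the same fixed cycle of operations, whatever is sensed."""
             cycle = ["approach", "grasp", "release"]
             return cycle[step % len(cycle)]

         def autonomous_controller(sensed_options: List[Dict]) -> str:
             """A rudimentary decision process: choose among sensed alternatives by scoring them."""
             # Hypothetical scoring rule: expected benefit minus expected risk.
             best = max(sensed_options, key=lambda o: o["benefit"] - o["risk"])
             return best["action"]

         if __name__ == "__main__":
             print(automatic_controller(4))  # -> "grasp": the cycle runs regardless of context
             print(autonomous_controller([
                 {"action": "engage", "benefit": 0.6, "risk": 0.9},
                 {"action": "hold",   "benefit": 0.2, "risk": 0.1},
             ]))  # -> "hold": the output depends on the sensed situation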

  2. See Ashby (1950) and Walter (1953).

  3. An excellent review is given by Lichocki et al. (2011). The issue of “roboethics” was raised at the 2004 First International Symposium on Roboethics in Sanremo, Italy (see Veruggio and Operto 2008).

  4. Mindell (2002) documents the development of feedback-based control systems before cybernetics, including for military purposes, from the 1920s onwards. Wiener and Bigelow’s work between 1940 and 1942 introduced into the design of predictors an approach that would radically change automatic-control theory, allowing for the introduction of frequency analysis, today known as classical control theory.
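
     To give a concrete sense of what designing such a predictor involves, here is a minimal sketch (the track data and numbers are purely hypothetical): an anti-aircraft controller must aim where the target will be, so it extrapolates the target’s future position from its observed track and feeds the resulting error back into the aiming loop.

         def predict_position(track, lead_time):
             """Linear extrapolation: estimate the position `lead_time` seconds ahead
             from the last two observed (time, position) fixes of the target."""
             (t0, x0), (t1, x1) = track[-2], track[-1]
             velocity = (x1 - x0) / (t1 - t0)
             return x1 + velocity * lead_time

         if __name__ == "__main__":
             # Hypothetical track: (time in s, position in m) pairs observed so far.
             observed_track = [(0.0, 100.0), (1.0, 130.0), (2.0, 161.0)]
             print(predict_position(observed_track, lead_time=3.0))  # aim point ~254 m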

  5. The dispute between Rosenblueth and Wiener on the one side, and Taylor on the other, presents many interesting points, which I have discussed elsewhere (Cordeschi 2002, chap. 4). See also Galison (1994).

  6. On this and other aspects see e.g. Galison (1994) and Conway and Siegelman (2005).

  7. AI’s first successes in those years were in the field of heuristic programming (see Cordeschi 2006). Samuel intervened to comment on Wiener’s claim, stating that the risks he referred to did not exist: a machine (actually, a computer program), Samuel objected, merely carries out the “intentions” of its programmer (on this point, and on the implications of machine learning, see Cordeschi and Tamburrini 2005, and above all Santoro et al. 2007; Tamburrini 2009).

  8. This is a situation in which a human decision-maker predicts a particular scenario that he considers plausible, e.g. because he has experienced it in a simulation during military training: he then ends up ignoring or undervaluing cues that seem to contradict it. Both the war games mentioned by Wiener and the case of the Vincennes fall within this category. Gray deals with the characteristics of these “synthetic environments” of the early 1990s. It thus seems that today’s disagreement is the same as it was back then: for some the conclusion must be that “the human is the limiting factor”, whilst for others simulations do not take human factors into account and are a “complete and utter triumph of chilling analytic, cybernetic rationality over chaotic, real life human desperation. […] Virtual reality as a new way of knowledge [is] a new and terrible kind of transcendent military power” (see Gray 1997: 62).

  9. See Lichocki et al. (2011) on the different claims on the ethical and legal aspects of this issue, which I am not dealing with here, and also on the thesis of robots as “moral agents” (at a given “level of abstraction”: see Floridi and Sanders 2004).

  10. This testing was begun by IBM in 2001 (see http://www.research.ibm.com/autonomic/) and later by DARPA: see Canning (2005).

  11. Some of these claims are made in the Human Rights Watch report of November 2012: see http://www.hrw.org/print/reports/2012/11/19/losing-humanity.

  12. According to Sunstein, the “availability heuristic” is at the core of the precautionary principle, to the extent that this heuristic suggests considering only certain risk factors, for example, the more recent or the more impressive ones. A consequence may be that “sometimes a certain risk, said to call for precautions, is cognitively available, whereas other risks, including those associated with regulation itself, are not” (Sunstein 2005: 37).

  13. Notice that here I am referring to decision-making in “ill-defined problems” and real-life problems, not in the early AI “well-defined problems” mentioned by Krishnan (2009: 40).

  14. This process should not be confused with what happens in the experimentation and production of robotic weapons when they are recognised as simply badly designed or malfunctioning. One example comes from the SWORDS robots, the production of which was initially stopped because of their malfunctioning (Krishnan 2009: 113).

  15. See Weiss (2003) for an insightful discussion on this point.

  16. See Caelli et al. (2011): forms of DoS have always been used in conventional warfare, for example, when trying to prevent the enemy from accessing food sources, transport facilities or telecommunication networks.

  17. See a continually updated list of “Cyber incidents” at the URL http://csis.org/files/publication/120504_Significant_Cyber_Incidents_Since_2006.pdf.

References

  • Andress J, Winterfeld S (2011) Cyber warfare. Elsevier, Amsterdam

  • Arkin R (2007) Governing lethal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture, Technical Report GIT-GVU-07-11. http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf

  • Arkin R (2009) Governing lethal behavior in autonomous robots. Chapman and Hall, Boca Raton

  • Asaro PM (2009) Modelling the moral user. IEEE Technol Soc 28(1):20–24. http://www.peterasaro.org/

  • Ashby WR (1950) A new mechanism which shows simple conditioning. J Psychol 29:343–347

  • Brewster N, Adams N, Tendayi K, Smith J (2011) Last line of defence. Phys Special Top 10(1), P4_12. http://physics.le.ac.uk/journals/index.php/pst/issue/view/11

  • Caelli WJ, Raghavan SV, Bhaskar SM, Georgiades J (2011) Policy and law: denial of service threat. In: Raghavan SV, Dawson E (eds) An investigation into the detection and mitigation of denial of service (DoS) attacks. Springer-India, New Delhi

  • Canning JS (2005) A definitive work on factors impacting the arming of unmanned vehicles, Final Report, Dahlgren Division Naval Surface Warfare Center, Dahlgren, Virginia. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA436214%26Location=U2%26doc=GetTRDoc.pdf

  • Canning JS (2009) You’ve just been disarmed. IEEE Technol Soc 28(1):12–15

  • Conway F, Siegelman J (2005) Dark hero of the information age: in search of Norbert Wiener, the father of cybernetics. Basic Books, New York

  • Cordeschi R (1991) The discovery of the artificial: some protocybernetic developments 1930–1940. AI Soc 5:218–238. Reprinted in: Chrisley RL (ed) Artificial intelligence: critical concepts in cognitive science, vol 1. Routledge, London and New York, pp 301–326

  • Cordeschi R (2002) The discovery of the artificial: behavior mind and machines before and beyond cybernetics. Kluwer, Dordrecht

  • Cordeschi R (2006) Searching in a maze, in search of knowledge. Lecture notes in computer science, vol 4155. Springer, Berlin-Heidelberg, pp 1–23

  • Cordeschi R (2007) AI turns fifty: revisiting its origins. Appl Artif Intell 21:259–279

  • Cordeschi R, Tamburrini G (2005) Intelligent machinery and warfare: historical debates and epistemologically motivated concerns. In: Magnani L, Dossena R (eds) Computing, philosophy, and cognition. King’s College Publications, London, pp 1–23

  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14:349–379

  • Galison P (1994) The ontology of the enemy: Norbert Wiener and the cybernetic vision. Crit Inq 21:228–266. Reprinted in: Franchi S, Bianchini F (eds) The search for a theory of cognition: early mechanisms and new ideas. Rodopi, Amsterdam and New York, pp 53–88

  • Gray CH (1997) Postmodern war: the new politics of conflict. The Guilford Press, New York. http://www.chrishablesgray.org/postmodernwar/index.html

  • Hull CL (1930) Knowledge and purpose as habit mechanisms. Psychol Rev 37:511–525

  • Kline RR (2011) Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann Hist Comput 33:5–16

  • Krishnan A (2009) Killer robots: legality and ethicality of autonomous weapons. Ashgate, Farnham

  • Lichocki P, Kahn P Jr, Billard A (2011) A survey of the current ethical landscape in robotics. IEEE Robot Autom Mag 18(1):39–50

  • Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design, US Department of Navy, Office of Naval Research. http://ethics.calpoly.edu/ONR_report.pdf

  • Marchant GE, Mossman KL (2004) Arbitrary and capricious: the precautionary principle in the European Union courts. AEI Press, Washington, D.C.

  • Marchant G, Allenby B, Arkin R, Barrett E, Borenstein J, Gaudet L, Kittrie O, Lin P, Lucas G, O’Meara R, Silberman J (2011) International governance of autonomous military robots. Columbia Sci Technol Law Rev 12:272–315. http://www.ssrn.com/

  • Matthews AH (1973) The wall of light: Nikola Tesla and the Venusian Space Ship/The Life of Nikola Tesla (Autobiography). Health Research, Pomeroy

  • Mindell DA (2002) Between human and machine: feedback, control, and computing before cybernetics. Johns Hopkins University Press, Baltimore and London

  • Newquist HP (1994) The brain makers. Sams, Indianapolis

  • Numerico T, Cordeschi R (2008) Norbert Wiener’s vision of the “information society”. Ontol Stud/Cuadernos de Ontología 8:111–125

  • Parasuraman R, Barne BK, Cosenzo K (2007) Adaptive automation for human-robot teaming in future command and control systems. Int C2 J 1(2):43–68

  • Rosenblueth A, Wiener N (1950) Purposeful and non-purposeful behavior. Philos Sci 17:318–326

  • Rosenblueth A, Wiener N, Bigelow J (1943) Behavior, purpose and teleology. Philos Sci 10:18–24

  • Ross T (1935) Machines that think. A further statement. Psychol Rev 42:387–393

  • Santoro M, Marino D, Tamburrini G (2007) Learning robots interacting with humans: from epistemic risk to responsibility. Artif Intell Soc 22:301–314

  • Scambray J, McClure S, Kurtz G (2001) Hacking exposed: network security secrets and solutions, 2nd edn. McGraw-Hill, Berkeley

  • Sharkey N (2008) Grounds for discrimination: autonomous robot weapons. RUSI Def Syst. http://rusi.org/downloads/assets/23sharkey.pdf

  • Singer PW (2009) Robots at war: the new battlefield. Wilson Q. http://www.wilsonquarterly.com/essays/robots-war-new-battlefield

  • Sunstein C (2005) Laws of fear: beyond the precautionary principle. Cambridge University Press, Cambridge

  • Tamburrini G (2009) Robot ethics: a view from the philosophy of science. In: Capurro R, Nagenborg M (eds) Ethics and robotics. IOS Press, Amsterdam, pp 11–22

  • Taylor R (1950) Purposeful and non-purposeful behaviour: a rejoinder. Philos Sci 17:327–332

  • Veruggio G, Operto F (2008) Roboethics: social and ethical implications of robotics. In: Siciliano B, Khatib O (eds) Handbook of robotics. Springer, Berlin, pp 1499–1524

  • Wagner AR, Arkin RC (2010) Acting deceptively: providing robots with the capacity for deception. Int J Soc Robotics 3:5–26

  • Walter WG (1953) The living brain. Duckworth, London

  • Weiss C (2003) Scientific uncertainty and science-based precaution. Int Environ Agreem Politics Law Econ 3:137–166

  • Wiener N (1948) Cybernetics, or control and communication in the animal and in the machine, 2nd edn (1962, Introduction added). MIT Press, Cambridge

  • Wiener N (1950) The human use of human beings. Houghton Mifflin, Boston

  • Wiener N (1960) Some moral and technical consequences of automation. Science 131:1355–1358

  • Wiener N (1964) God and Golem, Inc. MIT Press, Cambridge

Acknowledgments

This work was supported by PRIN 2009 research funds of the Italian Ministry of Education, Universities and Research (MIUR).

Author information

Correspondence to Roberto Cordeschi.

Cite this article

Cordeschi, R. Automatic decision-making and reliability in robotic systems: some implications in the case of robot weapons. AI & Soc 28, 431–441 (2013). https://doi.org/10.1007/s00146-013-0500-0
