
Ambient Intelligence, Criminal Liability and Democracy


Abstract

In this contribution we will explore some of the implications of the vision of Ambient Intelligence (AmI) for law and legal philosophy. AmI creates an environment that monitors and anticipates human behaviour with the aim of customised adaptation of the environment to a person’s inferred preferences. Such an environment depends on distributed human and non-human intelligence that raises a host of unsettling questions around causality, subjectivity, agency and (criminal) liability. After discussing the vision of AmI we will present relevant research in the field of philosophy of technology, inspired by the post-phenomenological position taken by Don Ihde and the constructivist realism of Bruno Latour. We will posit the need to conceptualise technological normativity in comparison with legal normativity, claiming that this is necessary to develop democratic accountability for the implications of emerging technologies like AmI. Lastly we will investigate to what extent technological devices and infrastructures can and should be used to achieve compliance with the criminal law, and we will discuss some of the implications of non-human distributed intelligence for criminal liability.


Notes

  1. ISTAG (2001), Aarts and Marzano (2003).

  2. Lessig (1999), p. 154.

  3. See the Communication of the European Commission to the European Parliament (2007) and the draft version of the European Policy Outlook RFID (2007).

  4. Custers (2004), Hildebrandt (2006b).

  5. Ambient Intelligence cannot be equated with the Artificial Intelligence (AI) criticised by, e.g., Dreyfus in his groundbreaking (1972). The reason is twofold: first, AI research has incorporated some of these criticisms and scaled down some of its claims regarding machine intelligence; second, in this case AI techniques are used to create AmI, which is more contextual, user-centric and practice-oriented than ‘old-style’ AI.

  6. ITU (2005).

  7. Kephart and Chess (2003).

  8. Quoted in Aarts and Marzano (2003), p. 259. Wired Magazine (online version: http://www.wired.com/wired/) is a well-known magazine that reports on how technology affects culture, politics and the economy. It tends towards creative techno-optimism, as may be guessed from this quotation from its founding father.

  9. Lessig (1999), p. 154. Cp. Brey (2006).

  10. Solove (2004).

  11. Sunstein (2001).

  12. This is the title of a book on non-human action that combines and critically assesses the work of continental and Anglo-American philosophers of technology: in particular Don Ihde and Bruno Latour. Verbeek (2005). Cf. Ihde (1990).

  13. This—in itself—is not a new situation. In the philosophy of mind swords have been crossed on this issue. Brain scientists have detected many ‘autonomic’ processes supposedly ‘causing’ our actions before conscious reflection comes in. On the complex relationship between conscious reflection and intentional action in the case of human subjects see Bayne (2006). Bayne critically discusses the conclusions drawn from the experiments by Libet, who is often claimed to have proven that brain states precede the conscious decision to act, supposedly meaning that they in fact ‘cause’ our actions. The fact that we experience our actions as the result of our own free will is then judged to be an illusion, see Haggard and Libet (2001). A more interesting position is developed by brain scientist and philosopher Varela in Varela et al. (1991).

  14. Cp. Bourcier (2001).

  15. Garreau (2005).

  16. On this transition, see Lévy (1990). On the possible consequences for the articulation of law, cp. Hildebrandt (2007, to be published).

  17. Latour (1999).

  18. In philosophy an agent could be defined as a subject capable of action. When acting, the agent becomes an actor. The notion of action is traditionally connected with intention. This is, however, not necessary: one could discriminate between action as a generic notion and intentional action as a particular type of action. Within the field of computer science, software programs or nodes in a network are often called agents. See footnote 19 below.

  19. Lévy (1990) (translation mh), p. 157.

  20. Within computer science, a definition of a MAS could be: ‘An agent can be a physical or virtual entity that can act, perceive its environment (in a partial way) and communicate with others, is autonomous and has skills to achieve its goals and tendencies. It is in a multi-agent system (MAS) that contains an environment, objects and agents (the agents being the only ones to act), relations between all the entities, a set of operations that can be performed by the entities and the changes of the universe in time and due to these actions’, see Ferber (1999), as discussed in Rouchier (2001).
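
     By way of illustration of the definition just quoted, a minimal sketch in Python (all names are ours and hypothetical; Ferber’s communication between agents and the relations between entities are elided for brevity): an environment contains objects and agents, only the agents act, and each agent perceives its surroundings partially while pursuing its own goal.

        class Agent:
            """A minimal agent in Ferber's sense: it perceives its environment
            only partially, acts autonomously and pursues its own goal."""

            def __init__(self, name, goal_position):
                self.name = name
                self.position = 0
                self.goal_position = goal_position

            def perceive(self, environment):
                # Partial perception: only objects within distance 2 are noticed.
                return [obj for obj in environment.objects
                        if abs(obj - self.position) <= 2]

            def act(self, environment):
                # Autonomous action: step towards the goal unless a perceived
                # object blocks the way.
                if self.position == self.goal_position:
                    return
                step = 1 if self.goal_position > self.position else -1
                if self.position + step not in self.perceive(environment):
                    self.position += step

        class Environment:
            """The multi-agent system: objects are passive, agents are the
            only entities that act, and every change of this small universe
            results from their actions."""

            def __init__(self, objects, agents):
                self.objects = objects
                self.agents = agents

            def run(self, steps):
                for t in range(steps):
                    for agent in self.agents:
                        agent.act(self)
                    print(t, {a.name: a.position for a in self.agents})

        env = Environment(objects=[3], agents=[Agent("a1", 5), Agent("a2", -4)])
        env.run(6)

     Running the sketch shows agent a2 reaching its goal while a1 is halted by the object at position 3: the resulting state of the environment is not reducible to any single agent’s intention.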

  21. Cf. the integration of computer science and sociological perspectives in: Meister et al. (2007).

  22. See again Varela et al. (1991) for a further elaboration of this position and of what they call ‘enaction’, being their paradigm for understanding the embodied mind.

  23. Ibid (translation mh), pp. 169–170. Cp. Friedrich Nietzsche in Die Fröhliche Wissenschaft, 1882, nr. 217: ‘Before the effect one believes in other causes than after the effect (translation mh)’.

  24. It may be interesting to discriminate, with Lévy, between types of action that merely consist of the realisation of a predefined possibility (implying mechanical application) and those that involve the actualisation of an underdetermined possibility (implying a measure of creativity and unpredictability). See Lévy (1998).

  25. Hart (1994/1961). Rules of competence are ultimately a matter of practice, an example of legal authorities declaring themselves legal authorities (Münchhausen is mythical but very real). Cp. Hart’s ultimate rule of recognition.

  26. Searle (1969), Mittag (2006).

  27. One could think of a situation in which the violation of a constitutive rule results in damage because the objective of the rule is not achieved. If ‘causing’ such damage is criminalised the violation of this constitutive rule does lead to criminal liability.

  28. I am appealing to Davidson’s principle of charity (or rational accommodation), claiming that the argument is coherent and refers to a reality that may overtake us sooner rather than later.

  29. Which could be entered into in earlier times without complying with such rules.

  30. Actually, it would be more precise to note that contemporary legal normativity, articulated in the technology of the script, cannot enforce compliance in the way that some technologies could. For an analysis of the transition from oral law to written law cp. Hildebrandt (2007).

  31. Lessig (1999), chapters 3 and 4. For instance, the anonymity and subsequent lack of accountability of behaviour on the Internet is a consequence of its design.

  32. Tien (2004), Brownsword (2005).

  33. The idea that democracy is at stake wherever people suffer or enjoy the indirect consequences of an action can be found in Dewey (1927).

  34. For an out-of-the-box way of rethinking democracy cp. Latour (2004).

  35. Verbeek (2005), pp. 6–9.

  36. ‘Mobile phone operators report their biggest profits from the runaway success of short text messages’, Damina Mycroft, ‘Intrinsic and Extrinsic Intelligence’, in Aarts and Marzano (2003), p. 256.

  37. At least, this is what Montesquieu suggested in his The Spirit of the Laws, XI, 3, Paris 1748: ‘liberty can consist only in the power of doing what we ought to will, and in not being constrained to do what we ought not to will’. Translation available at: http://www.agh-attorneys.com/4_charles_montesquieu_SOTL.htm

  38. An example is the unsuccessful plea of a husband who claimed that raping his wife did not count as rape under the common law. See (1995) 21 EHRR 363, in which the Court ruled that the husband’s having sex with his wife did in this case count as rape.

  39. To remedy this deficit we may have to learn to articulate some of the protective aspects of law in the technologies we aim to regulate, cf. Hildebrandt (2008, to be published).

  40. On the notion of a script, see Latour (1993). On the fact that the actual normative impact is constituted by the way humans actually attach themselves to a technology, see Verbeek (2005), p. 217 on the multistability of technological mediation and the interpretive flexibility this provides. The fact that a moral evaluation cannot be made in advance does not imply that we can sit back until the consequences have materialised. Rather, it calls for speculative exploration of such consequences and should involve some kind of democratic participation by those who may suffer or enjoy them.

  41. Kranzberg (1986).

  42. Verbeek (2005), p. 11.

  43. Ibid. Verbeek provides a critical reconstruction of the work of Jaspers, Marcuse and Heidegger before moving on to Ihde, Latour and Winner.

  44. Empirical speculation involves actual knowledge of a specific emerging technology and of the environment into which it may be introduced; it involves educated guessing and trained intuition rather than either general deductive reasoning or reductive quantitative social research. Cp. the way Isabelle Stengers uses the term speculation in Stengers (2000).

  45. This conception of law has been developed by Foqué and ‘t Hart (1990). See also Hildebrandt (2006a).

  46. This instrumentality differs from instrumentalism the way the pragmatist consequentialism of Peirce and Dewey differs from a utilitarian pragmatism in which means and ends are separated. In that sense prevention concerns the legitimate anticipation of the consequences of one’s actions rather than an amoral calculation of costs and benefits. This means that we agree with Duff’s rejection of utilitarian consequentialism (Duff 2001, pp. 3–19), without reverting to a Kantian position in which only intentions count.

  47. Interactive computing depends on deliberate human-computer interaction; proactive computing does not. The latter builds on the real-time monitoring and machine anticipation of inferred human preferences. Cp. Tennenhouse (2000).
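
     A minimal sketch of this contrast, assuming a hypothetical smart-lighting device (the function names are ours, not Tennenhouse’s):

        import statistics

        def interactive_lighting(command):
            # Interactive computing: the device changes state only upon a
            # deliberate user command.
            return {"dim": 20, "bright": 90}.get(command, 50)

        def proactive_lighting(observed_levels):
            # Proactive computing: the device monitors behaviour (here, the
            # brightness levels the user chose at this hour on previous days)
            # and anticipates the inferred preference without being asked.
            if not observed_levels:
                return 50  # no profile yet: fall back to a neutral default
            return round(statistics.mean(observed_levels))

        print(interactive_lighting("dim"))           # the user asks: 20
        print(proactive_lighting([18, 22, 20, 19]))  # inferred, unasked: 20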

  48. Stalder (2000–2003).

  49. A common definition of a node can be found at http://en.wikipedia.org/wiki/Node_(IT): ‘A node is a device that is connected as part of a computer network. (…) Nodes can be computers, personal digital assistants (PDAs), cell phones, or various other network appliances (…)’. We are using the term in a more generic way, integrating social network theory with network theory; a node in our case is the nexus of different lines of communication in a networked environment, meaning that nodes can also be human individuals.
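
     A toy sketch of this generic notion (names hypothetical): a small network in which devices and a human individual alike figure as nodes, each being the nexus of several lines of communication.

        # Adjacency list: every key is a node, whether human or device.
        network = {
            "alice": ["alice_phone", "home_server"],      # a human individual
            "alice_phone": ["alice", "cell_tower_17"],    # a device
            "home_server": ["alice", "cell_tower_17"],
            "cell_tower_17": ["alice_phone", "home_server"],
        }

        def degree(node):
            # The number of lines of communication meeting in this node.
            return len(network[node])

        for n in network:
            print(n, degree(n))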

  50. Miller (2006).

  51. This brings us well into the nexus of the philosophy of mind and the neurosciences, cp. Varela et al. (1991).

  52. Duff (2001), p. 79.

  53. In the community of researchers on Artificial Intelligence (AI) and Computational Intelligence (CI), claims concerning the emergence of artificial consciousness are prominent. CI is defined as involving ‘iterative development or learning (…). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Methods mainly include: neural networks, fuzzy systems and evolutionary computations’, see http://en.wikipedia.org/wiki/Artificial_intelligence.
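
     A minimal sketch of ‘learning based on empirical data’ in the sense just quoted: a single perceptron, the simplest neural network, learns the logical AND function from examples (illustrative only, not drawn from the cited sources).

        # Empirical training data: examples of the AND function.
        data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b, lr = [0.0, 0.0], 0.0, 0.1

        for epoch in range(20):            # iterative development
            for (x1, x2), target in data:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                # Non-symbolic learning: numeric weights are adjusted;
                # no explicit rule is ever written down.
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err

        print(w, b)  # the learned weights now implement AND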

  54. Plessner (1975).

References

  • Aarts, E., & Marzano, S. (Eds.) (2003). The new everyday. Views on ambient intelligence. Rotterdam: 010 Publishers.

  • Bayne, T. (2006). Phenomenology and the feeling of doing: Wegner on the conscious will. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? An investigation of the nature of volition. Cambridge, MA: MIT Press.

  • Bourcier, D. (2001). De l’intelligence artificielle à la personne virtuelle: émergence d’une entité juridique? Droit et Société, 49, 847–871.

  • Brey, P. (2006). Freedom and privacy in ambient intelligence. Ethics and Information Technology, 7, 157–166.

  • Brownsword, R. (2005). Code, control, and choice: Why East is East and West is West. Legal Studies, 25(1), 1–22.

  • Communication of the European Commission to the European Parliament (2007). RFID in Europe: Steps towards a policy framework, COM(2007) 96 final of March 2007.

  • Custers, B. (2004). The power of knowledge. Ethical, legal, and technological aspects of data mining and group profiling in epidemiology. Nijmegen: Wolf Legal Publishers.

  • Dewey, J. (1927). The public & its problems. Chicago: The Swallow Press.

  • Dreyfus, H. (1972). What computers can’t do. New York: Harper and Row.

  • Duff, R. A. (2001). Punishment, communication, and community. Oxford: Oxford University Press.

  • European Policy Outlook RFID (2007). Draft version, the working document for the expert conference on ‘RFID: Towards the Internet of Things’ in June 2007 in Berlin.

  • Ferber, J. (1999). Multi-agent systems: An introduction to distributed artificial intelligence. Harlow: Addison Wesley Longman.

  • Foqué, R., & ‘t Hart, A. C. (1990). Instrumentaliteit en rechtsbescherming. Arnhem: Gouda Quint.

  • Garreau, J. (2005). Radical evolution. The promise and peril of enhancing our minds, our bodies—and what it means to be human. New York: Doubleday.

  • Haggard, P., & Libet, B. (2001). Conscious intention and brain activity. Journal of Consciousness Studies, 8(11), 47–63.

  • Hart, H. L. A. (1994/1961). The concept of law. Oxford: Clarendon Press.

  • Hildebrandt, M. (2006a). Trial and ‘fair trial’: From peer to subject to citizen. In A. Duff, L. Farmer, S. Marshall, & V. Tadros (Eds.), The trial on trial. Judgment and calling to account (Vol. 2, pp. 15–37). Oxford: Hart.

  • Hildebrandt, M. (2006b). From data to knowledge: The challenges of a crucial technology. In DuD—Datenschutz und Datensicherheit (Vol. 30, No. 9, pp. 548–552).

  • Hildebrandt, M. (2007). Technology and the end of law. In E. Claes & B. Keirsbilck (Eds.), The limits of (the rule of) law. Oxford: Hart (to be published).

  • Hildebrandt, M. (2008). A vision of ambient law. In R. Brownsword & K. Yeung (Eds.), Regulating technologies. Oxford: Hart (to be published).

  • Ihde, D. (1990). Technology and the lifeworld. From garden to earth. Bloomington: Indiana University Press.

  • ISTAG (2001). Scenarios for ambient intelligence in 2010. Information Society Technology Advisory Group, available at: http://www.cordis.lu/ist/istag-reports.htm.

  • ITU (2005). The internet of things. Geneva: International Telecommunication Union (ITU).

  • Kephart, J. O., & Chess, D. M. (2003). The vision of autonomic computing. Computer, 36, 41–50.

  • Kranzberg, M. (1986). Technology and history: ‘Kranzberg’s laws’. Technology and Culture, 27, 544–560.

  • Latour, B. (1993). La Clef de Berlin et autres leçons d’un amateur de sciences. Paris: La Découverte.

  • Latour, B. (1999). Pandora’s hope. Essays on the reality of science studies. Cambridge, MA: Harvard University Press.

  • Latour, B. (2004). Politics of nature. How to bring the sciences into democracy (C. Porter, Trans.). Cambridge, MA: Harvard University Press.

  • Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.

  • Lévy, P. (1990). Les technologies de l’intelligence. L’avenir de la pensée à l’ère informatique. Paris: La Découverte.

  • Lévy, P. (1998). Becoming virtual. Reality in the digital age. New York: Plenum Trade.

  • Meister, M., Schröter, K., et al. (2007). Construction and evaluation of social agents in hybrid settings: Approach and experimental results of the INKA project. Journal of Artificial Societies and Social Simulation, 10(1), http://www.jasss.soc.surrey.ac.uk/10/1/4.html.

  • Miller, S. (2006). Collective moral responsibility: An individualist approach. Midwest Studies in Philosophy, 30(1), 176–194.

  • Mittag, M. (2006). A legal theoretical approach to criminal procedure law: The structure of rules in the German code of criminal procedure. German Law Journal, 7(8), 637–646.

  • Plessner, H. (1975). Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie. Frankfurt: Suhrkamp.

  • Rouchier, J. (2001). Review of Jacques Ferber’s multi-agent system: An introduction to distributed artificial intelligence. Journal of Artificial Societies and Social Simulation, 4(2), http://www.jasss.soc.surrey.ac.uk/4/2/reviews/rouchier.html.

  • Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge University Press.

  • Solove, D. J. (2004). The digital person. Technology and privacy in the information age. New York: New York University Press.

  • Stalder, F. (2000–2003). Beyond constructivism: Towards a realistic realism. A review of Bruno Latour’s Pandora’s Hope. The Information Society, 16, 245–247.

  • Stengers, I. (2000). Another look: Relearning to laugh. Hypatia, 15(4), 41–54.

  • Sunstein, C. (2001). Republic.com. Princeton: Princeton University Press.

  • Tennenhouse, D. (2000). Proactive computing. Communications of the ACM, 43(5), 43–50.

  • Tien, L. (2004). Architectural regulation and the evolution of social norms. International Journal of Communications Law & Policy (9), Special Issue on Cybercrime.

  • Varela, F. J., Thompson, E., et al. (1991). The embodied mind. Cognitive science and human experience. Cambridge, MA: MIT Press.

  • Verbeek, P.-P. (2005). What things do. Philosophical reflections on technology, agency and design. University Park, PA: Pennsylvania State University Press.


Author information

Correspondence to Mireille Hildebrandt.


About this article

Cite this article

Hildebrandt, M. Ambient Intelligence, Criminal Liability and Democracy. Criminal Law and Philosophy 2, 163–180 (2008). https://doi.org/10.1007/s11572-007-9042-1

