
Moral control and ownership in AI systems

Open Forum

Abstract

AI systems are bringing an augmentation of human capabilities to shape the world. They may also bring about a replacement of human conscience in large areas of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with 'solutions' pre-packaged or developed by the 'intelligent' machine itself. Artificial intelligence systems (AIS) are increasingly being used in multiple applications and are receiving growing attention from public and private organisations. The purpose of this article is to offer a mapping of the technological architectures that support AIS, with a specific focus on moral agency. Through a literature review and a reflection process, the following areas are covered: a brief introduction and review of the literature on moral agency; an analysis using the BDI logic model (Bratman 1987); an elemental review of artificial 'reasoning' architectures in AIS; the influence of data input and data quality; AI systems' positioning in decision-support and decision-making scenarios; and, finally, some conclusions regarding the potential loss of moral control by humans due to AIS. This article contributes to the field of ethics and artificial intelligence by providing a discussion for developers and researchers to understand how, and under what circumstances, the 'human subject' may totally or partially lose moral control and ownership over AI technologies. The topic is relevant because AIS are often not single machines but complex networks of machines that feed information and decisions into each other and to human operators. Detailed traceability of input-process-output at each node of the network is essential for the system to remain within the field of moral agency. Moral agency, in turn, is at the basis of our system of legal responsibility, and social approval is unlikely to be obtained for entrusting important functions to complex systems in which no moral agency can be identified.
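To make the abstract's traceability requirement concrete, here is a minimal sketch, not taken from the article itself: each node in a network of AIS records its input-process-output triple at every step, so a human operator can later reconstruct which machine contributed what to a decision. All names (`TracedNode`, the two example nodes, the threshold) are hypothetical illustrations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, List

@dataclass
class TraceRecord:
    # One input-process-output entry for a single node in the network.
    node: str
    inputs: Any
    output: Any
    timestamp: str

@dataclass
class TracedNode:
    # A machine in the network that logs every decision it produces.
    name: str
    process: Callable[[Any], Any]  # the node's decision logic
    trace: List[TraceRecord] = field(default_factory=list)

    def run(self, inputs: Any) -> Any:
        output = self.process(inputs)
        self.trace.append(TraceRecord(
            node=self.name,
            inputs=inputs,
            output=output,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        return output

# Two hypothetical nodes feeding decisions into each other, as the
# abstract describes for complex networks of machines.
scorer = TracedNode("risk_scorer", lambda x: {"score": x["amount"] / 1000})
decider = TracedNode("loan_decider",
                     lambda s: "approve" if s["score"] < 5 else "refer_to_human")

decision = decider.run(scorer.run({"amount": 3200}))

# A human can reconstruct which node decided what, and from which inputs.
for record in scorer.trace + decider.trace:
    print(record)
```

Without such per-node records, responsibility for the final decision dissolves across the network, which is precisely the loss of moral ownership the article examines.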


Notes

  1. For the purposes of this article, we do not enter into the debate of moral responsibility vs moral accountability (see Floridi and Sanders 2004 or Bauer 2018). Our goal is not to attribute either of those, but rather to highlight when (human) moral control might be lost.

  2. We keep the word 'Desires' here because it was used by Bratman and continues to be used in the related literature. However, it has a psychological, Humean flavour that does not seem necessary. Perhaps, rather than 'Desires', we should speak of 'Purposes', with the same logical content and less psychological charge. (A minimal sketch of the BDI structure appears after these notes.)

  3. See, for example, Faucher and Roques (2018).

  4. We shall not enter into the much-discussed issue of the moral and legal responsibilities of AI-endowed systems. A good summary of both issues can be found in Chinen (2019).

  5. We use the word 'decision' here to group together what Bratman (1987) calls 'intentions' and 'plans'.

  6. This author was the first to adopt this terminology; his may also be the first paper to propose common sense reasoning ability as the key to AI.

  7. The contrary is also frequent: a machine under the regular control of a human operator passes to an automatic system upon catastrophic failure of that operator (for example, when the driver of a car becomes distracted or falls asleep and the car threatens to leave the road). These are rarely AI systems: they do not have enough time or experience to 'learn' from their own performance. They are, rather, fully programmed emergency mechanisms (see the second sketch after these notes).
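As promised in note 2, here is a minimal, hypothetical sketch of the BDI (Beliefs-Desires-Intentions) structure from Bratman (1987) that the article analyses. It is not code from the article or from any BDI library, and the deliberation rule, under which a desire becomes an intention only if the beliefs mark it as feasible, is deliberately simplistic:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BDIAgent:
    # Bratman-style agent state: Beliefs about the world, Desires
    # (or 'Purposes', as note 2 suggests), and Intentions committed to.
    beliefs: Dict[str, bool] = field(default_factory=dict)
    desires: List[str] = field(default_factory=list)
    intentions: List[str] = field(default_factory=list)

    def deliberate(self) -> None:
        # Promote to intentions only the desires the beliefs make feasible.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(f"can_{d}", False)]

agent = BDIAgent(
    beliefs={"can_brake": True, "can_swerve": False},
    desires=["brake", "swerve"],
)
agent.deliberate()
print(agent.intentions)  # ['brake']: only feasible desires become intentions
```

In the terminology of note 5, the resulting intentions stand in for Bratman's 'intentions' and 'plans' taken together as 'decisions'.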
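For note 7, the following illustrates the distinction between a learning system and a fully programmed, non-learning emergency fallback. The function name, threshold, and messages are invented for this sketch and are not from the article:

```python
def vehicle_fallback(driver_alert: bool, lane_offset_m: float) -> str:
    # A fixed, non-learning rule that takes over only when the human
    # operator fails, in the spirit of note 7.
    MAX_SAFE_OFFSET_M = 0.5  # hypothetical lane-departure limit in metres

    if driver_alert:
        return "human retains control"
    if abs(lane_offset_m) > MAX_SAFE_OFFSET_M:
        return "emergency system steers back into lane and slows down"
    return "warn the driver; keep monitoring"

# Driver asleep and drifting out of lane: the programmed mechanism acts.
print(vehicle_fallback(driver_alert=False, lane_offset_m=0.8))
```

Because every branch is fixed in advance by the designer, moral control remains traceable to a human decision, unlike systems that adapt their behaviour from their own performance.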

Abbreviations

ANN: Artificial neural networks
AI: Artificial intelligence
AIS: Artificial intelligence systems
AS: Autonomous systems
DSS: Decision support systems
GAN: Generative adversarial networks
ML: Machine learning
MTT: Moral Turing test
RL: Reinforcement learning
SAS: Semi-autonomous systems

References

  • Anderson M, Anderson SL (2014) GenEth: a general ethical dilemma analyzer. In: Twenty-Eighth AAAI Conference on Artificial Intelligence

  • Aristotle (2000) Nicomachean ethics (trans: Crisp R). Cambridge University Press, Cambridge

  • Arnold T, Scheutz M (2016) Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Ethics Inf Technol 18(2):103–115

  • Arvan M (2018) Mental time-travel, semantic flexibility, and AI ethics. AI Soc, pp 1–20

  • Autili M, Di Ruscio D, Inverardi P, Pelliccione P, Tivoli M (2019) A software exoskeleton to protect and support citizen’s ethics and privacy in the digital world. IEEE Access 7:62011–62021

  • Balakrishnan A, Bouneffouf D, Mattei N, Rossi F (2018) Using contextual bandits with behavioral constraints for constrained online movie recommendation. In: IJCAI, pp 5802–5804

  • Barredo AA et al (2019) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. arXiv preprint arXiv:1910.10045

  • Bauer WA (2018) Virtuous vs utilitarian artificial moral agents. AI Soc, pp 1–9

  • Boddington P (2017) Towards a code of ethics for artificial intelligence. Springer, Cham

  • Bowles C (2018) Future ethics. NowNext Press, East Sussex

  • Boyd D, Crawford K (2012) Critical questions for Big Data. Inform Commun Soc 15(5):662–679. https://doi.org/10.1080/1369118X.2012.678878

  • Bratman M (1987) Intention, plans, and practical reason. Harvard University Press, Cambridge

  • Broussard M (2018) Artificial unintelligence: how computers misunderstand the world. MIT Press, Cambridge

  • Caliskan A, Bryson JJ, Narayanan A (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356(6334):183–186

  • Carter SM, Mayes C, Eagle L, Dahl S (2017) A code of ethics for social marketing? Bridging procedural ethics and ethics-in-practice. J Nonprofit Public Sect Mark 29(1):20–38

  • Charisi V, Dennis L, Fisher M, Lieck R, Matthias A, Slavkovik M, Sombetzki J, Winfield AF, Yampolskiy R (2017) Towards moral autonomous systems. arXiv preprint arXiv:1703.04741

  • Chinen M (2019) Law and autonomous machines: the co-evolution of legal responsibility and technology. Edward Elgar Publishing, Cheltenham, UK

  • Ekbia H, Mattioli M, Kouper I, Arave G, Ghazinejad A, Bowman T, Suri VR, Tsou A, Weingart S, Sugimoto CR (2015) Big data, bigger dilemmas: a critical review. J Assn Inf Sci Tech 66:1523–1545. https://doi.org/10.1002/asi.23294

  • Faucher N, Roques M (2018) The ontology, psychology and axiology of habits (habitus) in medieval philosophy. Springer, New York

  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14(3):349–379

  • Friedler SA, Scheidegger C, Venkatasubramanian S, Choudhary S, Hamilton EP, Roth D (2018) A comparative study of fairness-enhancing interventions in machine learning. arXiv preprint arXiv:1802.04422

  • Gelernter DH (1992) Mirror worlds. Oxford University Press, Oxford

  • Gerdes A, Øhrstrøm P (2015) Issues in robot ethics seen through the lens of a moral Turing test. J Inform Commun Ethics Soc

  • Gibson S (2019) Arguing, obeying and defying: a rhetorical perspective on Stanley Milgram’s obedience experiments. Cambridge University Press, New York

  • Greene D, Hoffmann AL, Stark L (2019) Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Proceedings of the 52nd Hawaii International Conference on System Sciences

  • Hacker P (2018) Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Rev 55(4):1143–1185

  • Hester PT, Adams KM (2017) Systemic decision making: fundamentals for addressing problems and messes. Springer, New York

  • AI HLEG, High-Level Expert Group on Artificial Intelligence (2018) Draft ethics guidelines for trustworthy AI. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=57112

  • Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399

  • Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York

  • Loreggia A, Mattei N, Rossi F, Venable KB (2018) Preferences and ethical principles in decision making. In: 2018 AAAI Spring Symposium Series

  • Markham AN, Tiidenberg K, Herman A (2018) Ethics as methods: doing ethics in the era of big data research—introduction. Soc Media Soc. https://doi.org/10.1177/2056305118784502

  • McCarthy J (1958) Programs with common sense. In: Proceedings of the Teddington Conference on the Mechanisation of Thought Processes

  • McQuillan D (2018) People’s councils for ethical machine learning. Soc Media Soc 4(2):2056305118768303

  • Metcalf J, Keller EF, Boyd D (2019) Perspectives on big data, ethics, and society. Council for Big Data, Ethics, and Society. https://bdes.datasociety.net/council-output/perspectives-on-big-data-ethics-and-society/

  • Meyer JJC, Broersen J, Herzig A (2015) BDI logics. In: van Ditmarsch H (ed) Handbook of epistemic logic. College Publications, London

  • Mingers J, Walsham G (2010) Toward ethical information systems: the contribution of discourse ethics. MIS Q 34(4):833–854

  • Mnich M (2018) Big data algorithms beyond machine learning. KI-Künstliche Intelligenz 32(1):9–17

  • Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533

  • Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21

  • Nath R, Sahu V (2017) The problem of machine ethics in artificial intelligence. AI Soc 35(1):103–111

  • Plato (2018) The republic. Cambridge University Press, Cambridge

  • AI Now Institute (2018) AI Now Report 2018. New York

  • Rossi F, Mattei N (2019) Building ethically bounded AI. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 33, pp 9785–9789

  • Russell SJ, Norvig P, Davis E (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, Upper Saddle River

  • Salgues B (2018) Society 5.0. Wiley, Hoboken

  • Shortliffe EH (1976) Computer-based medical consultations: MYCIN. Elsevier, Amsterdam

  • Silver N (2012) The signal and the noise: why so many predictions fail–but some don’t. Penguin Press, New York

  • Smith G (2018) The AI delusion. Oxford University Press, Oxford

  • Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. MIT Press, Cambridge

  • Tiberius V (2015) Moral psychology: a contemporary introduction. Taylor & Francis Group, New York

  • Tomasello M (2018) Précis of a natural history of human morality. Philos Psychol 31(5):661–668

  • Torrance S (2013) Artificial agents and the expanding ethical circle. AI Soc 28(4):399–414

  • Vakkuri V, Abrahamsson P (2018) The key concepts of ethics of artificial intelligence. In: 2018 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), pp 1–6

  • Wallach W (2008) Implementing moral decision making faculties in computers and robots. AI Soc 22(4):463–475

  • Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, New York

  • Wallach W, Allen C (2012) Hard problems: framing the Chinese Room in which a robot takes a moral Turing test. AISB/IACAP, University of Birmingham, p 5

  • Winfield AF, Jirotka M (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos Trans R Soc A 376(2133):20180085

  • Winfield AF, Michael K, Pitt J, Evers V (2019) Machine ethics: the design and governance of ethical AI and autonomous systems. Proc IEEE 107(3):509–517

  • Wu YH, Lin SD (2018) A low-cost ethics shaping approach for designing reinforcement learning agents. In: Thirty-Second AAAI Conference on Artificial Intelligence

  • Yu H, Shen Z, Miao C, Leung C, Lesser VR, Yang Q (2018) Building ethics into artificial intelligence. arXiv preprint arXiv:1812.02953

  • Zilberstein S (2015) Building strong semi-autonomous systems. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence

  • Zubiri X (1986) Sobre el hombre. Alianza, Madrid


Author information


Corresponding author

Correspondence to Javier Camacho Ibáñez.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Gonzalez Fabre, R., Camacho Ibáñez, J. & Tejedor Escobar, P. Moral control and ownership in AI systems. AI & Soc 36, 289–303 (2021). https://doi.org/10.1007/s00146-020-01020-z
