From machine ethics to computational ethics

Open Forum

AI & SOCIETY

Abstract

Research into the ethics of artificial intelligence is often categorized into two subareas: robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields found in the literature are conflated, which I seek to rectify. In this essay, I contend that the term ‘machine ethics’ is too broad and glosses over issues that the term ‘computational ethics’ best describes. I show that the subject of inquiry of computational ethics is of great value and is indeed an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation ‘machine ethics’ does not sufficiently capture the entire project of embedding ethics into AIS, hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a fine-grained analysis that shows the thematic distinction among robot ethics, machine ethics and computational ethics.

Notes

  1. As is well known, Asimov’s three laws of robotics are as follows: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. A fourth law, also referred to as the zeroth law, states that a robot may not harm humanity or, by inaction, allow humanity to come to harm.

  2. Clarke identifies certain constraints on Asimov’s laws of robotics that would make them computationally difficult to implement: the ambiguity and cultural dependence of the terms used in formulating the laws; the role of judgment in decision-making, which would be quite tricky to implement given the degree of programming required; the sheer complexity, which borders on having to account for all possible scenarios; the scope for dilemma and deadlock; robot autonomy; the audit of robot compliance; and the scope for adaptation. The sketch below illustrates how quickly these constraints bite.
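
     To make the difficulty concrete, here is a deliberately naive sketch of the three laws as a priority-ordered rule check. It is an illustrative assumption of mine, not an implementation from the literature; every name (`Action`, `harms_human`, and so on) is hypothetical, and each boolean field hides exactly the ambiguities Clarke identifies.

     ```python
     # Naive, hypothetical sketch: Asimov's laws as a priority-ordered check.
     # Each boolean presupposes a predicate ('harm', 'order', 'protection')
     # that is culturally loaded and context-dependent, which is Clarke's point.
     from dataclasses import dataclass

     @dataclass
     class Action:
         description: str
         harms_human: bool     # undecidable in practice without a theory of harm
         obeys_order: bool
         preserves_self: bool

     def permissible(action: Action) -> bool:
         """Apply the three laws in strict priority order (naive reading)."""
         if action.harms_human:        # First Law overrides everything
             return False
         if not action.obeys_order:    # Second Law, subject to the First
             return False
         return action.preserves_self  # Third Law, lowest priority

     print(permissible(Action("fetch coffee", False, True, True)))  # True
     ```

     Even this toy version assumes that `harms_human` can be computed, which is precisely where judgment, dilemma, and deadlock enter.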

  3. Smith and Anderson, in the 2014 Pew Research report titled “AI, Robotics, and the Future of Jobs”, discuss the economic and social impact of AI on society. As we continue to build more autonomous intelligent systems, we are likely to delegate responsibilities around security, the environment, healthcare, food production, and so on to these systems, all of which raises concerns about the impact of AI on jobs and society.

  4. Closely linked to the moral issues with AI is the debate around its legal status, agency, and responsibility. With the imminent disruption of the transport sector by the introduction of self-driving cars, questions about who bears responsibility for harm caused by a self-driving car come to mind. There are also more technical questions around insurance and liability that have to be addressed (Chopra and White 2011).

  5. In much of the literature, Asimov is unarguably seen as a forerunner in the development of guidelines to regulate the operations of autonomous intelligent systems.

  6. We might consider, for instance, a desktop printer to be a machine, but it is quite different from a self-driving car, which can also be said to be a machine. The difference lies in the degree of autonomy of these systems and the attendant moral burden they carry. The actions of the printer may have a moral impact, for example if it is used to print documents for whistleblowing activities. A self-driving car, on the other hand, appears to carry a greater ethical burden because it is active in the moral decision-making process. As Lumbreras (2017) notes, the goal of machine ethics is ultimately to ‘endow’ self-governing systems with ethical comportment. On this view, a desktop printer would not count as ‘self-governing’ but a self-driving car would.

  7. Putting Moor’s classification alongside Asaro’s, amoral agents are those I have identified as ethical impact agents. Systems with moral significance are represented as implicit moral agents. Explicit moral agents are systems with dynamic moral intelligence that can make moral decisions while employing moral principles explicitly. The final type of moral agent identified by Moor is the full ethical agent, which shares human-like properties. A minimal encoding of this scale is sketched below.
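
     Moor’s four kinds form an ordered scale of moral capability, which can be captured with a simple data type. This is a hypothetical encoding of mine; the labels and ordering are my gloss, not Moor’s notation.

     ```python
     from enum import IntEnum

     class MoralAgency(IntEnum):
         """Moor's four kinds of ethical agents, in increasing capability (my gloss)."""
         ETHICAL_IMPACT = 1  # amoral systems whose use nonetheless has moral consequences
         IMPLICIT = 2        # ethics built into the system's safe design constraints
         EXPLICIT = 3        # represents moral principles and applies them in decisions
         FULL = 4            # human-like moral agency

     # The ordering supports the kind of comparison the taxonomy implies:
     assert MoralAgency.EXPLICIT > MoralAgency.IMPLICIT
     ```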

  8. In explicating the importance of these criteria, Floridi and Sanders note: “(a) Interactivity means that the agent and its environment (can) act upon each other… (b) Autonomy means that the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state. So an agent must have at least two states. This property imbues an agent with a certain degree of complexity and decoupled-ness from its environment. (c) Adaptability means that the agent’s interactions (can) change the transition rules by which it changes state. This property ensures that an agent might be viewed, at the given LoA, as learning its own mode of operation in a way, which depends critically on its experience” (Floridi and Sanders 2004, p. 7).
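
     Floridi and Sanders’ three criteria read almost like a program specification. The following is a minimal, hypothetical sketch of mine (all names are illustrative, not theirs) of an agent with at least two states, internal transitions, and transition rules that interaction can rewrite.

     ```python
     # Hypothetical sketch of Floridi and Sanders' criteria at a fixed LoA.
     class MinimalAgent:
         def __init__(self):
             self.state = "idle"  # the agent has at least two states
             # Transition rules: (state, stimulus) -> next state.
             self.rules = {("idle", "ping"): "active",
                           ("active", "ping"): "idle"}

         def interact(self, stimulus: str) -> str:
             """Interactivity: the environment acts on the agent, and it responds."""
             self.state = self.rules.get((self.state, stimulus), self.state)
             return self.state

         def tick(self) -> None:
             """Autonomy: an internal transition with no external stimulus."""
             if self.state == "active":
                 self.state = "idle"

         def adapt(self, key: tuple, new_state: str) -> None:
             """Adaptability: interactions can change the transition rules themselves."""
             self.rules[key] = new_state

     agent = MinimalAgent()
     agent.interact("ping")                      # interactivity
     agent.tick()                                # autonomy
     agent.adapt(("idle", "ping"), "learning")   # adaptability
     ```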

  9. In answering the question of how to go about embedding ethical principles into AIS, it behoves machine ethicists to decide on the best approaches to use. So far, three approaches stand out: top-down, bottom-up, and hybrid. In the top-down approach, an ethical principle is selected and applied in a theoretical form to the AIS using a rule-based method such as Asimov’s three laws of robotics (Allen et al. 2005). The bottom-up approach, on the other hand, does not refer to any particular ethical principle; instead, through machine learning, these intelligent systems learn subsets of ethical principles and over time integrate them into a whole, and possibly unique, ethical system (Wallach and Allen 2008). Then there is the hybrid approach, which is simply the fusion of the two. A schematic contrast of the three is sketched below.
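
     The contrast can be made schematic. In this hedged sketch of mine, the fixed rule set, the toy ‘experience’ table, and all function names are illustrative assumptions rather than any published system; a real bottom-up component would be a trained model, not a lookup table.

     ```python
     # Schematic, illustrative contrast of the three approaches.

     def top_down_permissible(action: str) -> bool:
         """Top-down: a fixed principle applied as an explicit rule."""
         forbidden = {"deceive user", "cause harm"}  # stand-in for an ethical theory
         return action not in forbidden

     # Bottom-up: behaviour induced from (toy) labelled experience; in practice
     # this would be learned by a model rather than stored in a dictionary.
     experience = {"share data without consent": False,
                   "warn pedestrian of danger": True}

     def bottom_up_permissible(action: str) -> bool:
         """Bottom-up: generalize from cases; unknown cases get a cautious default."""
         return experience.get(action, False)

     def hybrid_permissible(action: str) -> bool:
         """Hybrid: a learned judgment constrained by explicit top-down rules."""
         return top_down_permissible(action) and bottom_up_permissible(action)

     print(hybrid_permissible("warn pedestrian of danger"))  # True
     ```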

  10. Parthemore and Whitby make a strong case for why embodiment constitutes an important aspect of the project to build artificial moral agents. This is because embodiment appeals to the human tendency to relate and nurture, and does so regardless of the form these systems take, whether biological or synthetic. We usually tend to care for things we anthropomorphise.

References

  • Aaby AA (2005) Computational ethics. Creative commons attribution license. https://pdfs.semanticscholar.org/2db4/e8051cbbab4b916520d9ff15ef68a315a21b.pdf. Accessed 25 Sept 2019.

  • Abney K (2012) Robotics, ethical theory, and metaethics: a guide for the perplexed. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 35–52

  • Allen C, Wallach W (2012) Moral machines: contradiction in terms or abdication of human responsibility. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 55–68

  • Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261

  • Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf Technol 7(3):149–155

  • Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17

  • Anderson RE (1992) Social impacts of computing: codes of professional ethics. Soc Sci Comput Rev 10(4):453–469

  • Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15–15

  • Anderson M, Anderson S, Armen C (2005) Towards machine ethics: implementing two action-based ethical theories. In: Proceedings of the AAAI 2005 Fall Symposium on Machine Ethics, pp 1–7

  • Anderson M, Anderson SL, Armen C (2006) An approach to computing ethics. IEEE Intell Syst 21(4):56–63

  • Arnold T, Scheutz M (2016) Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Ethics Inf Technol 18(2):103–115

  • Asaro PM (2006) What should we want from a robot ethic? Int Rev Inf Ethics 6(12):9–16

  • Asaro P (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. Int Rev Red Cross 94(886):687–709

  • Asimov I (1950) Runaround. In: I, Robot. Bantam Dell, New York

  • Baral C, Gelfond M (1994) Logic programming and knowledge representation. J Logic Program 19:73–148

  • Boddington P (2017) Towards a code of ethics for artificial intelligence. Springer, Cham

  • Borenstein J, Pearson Y (2010) Robot caregivers: harbingers of expanded freedom for all? Ethics Inf Technol 12(3):277–288

  • Bostrom N (2003) Ethical issues in advanced artificial intelligence. Sci Fiction Philos Time Travel Superintell 2003:277–284

  • Bostrom N (2016) Ethical issues in advanced artificial intelligence. In: Schneider S (ed) Science fiction and philosophy: from time travel to superintelligence. Wiley, Oxford, pp 277–284

  • Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey WM (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 316–334

  • Boyles RJM (2018) A case for machine ethics in modelling human-level intelligent agents. Kritike: Online J Philos 12(1):182–200

  • Boyles RJM, Joaquin JJ (2019) Why friendly AIs won’t be that friendly: a friendly reply to Muehlhauser and Bostrom. AI Soc. https://doi.org/10.1007/s00146-019-00903-0

  • Bozdag E (2013) Bias in algorithmic filtering and personalization. Ethics Inf Technol 15(3):209–227

  • Brundage M (2014) Limitations and risks of machine ethics. J Exp Theor Artif Intell 26(3):355–372

  • Bryson JJ (2010) Robots should be slaves. In: Wilks Y (ed) Close engagements with artificial companions: key social, psychological, ethical and design issues. John Benjamins Publishing Company, Amsterdam, pp 63–74

  • Bynum TW (2001) Computer ethics: its birth and its future. Ethics Inf Technol 3(2):109–112

  • Cardon A (2006) Artificial consciousness, artificial emotions, and autonomous robots. Cogn Process 7(4):245–267

  • Chan D (2017) The AI that has nothing to learn from humans. The Atlantic. https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/. Accessed 25 Sept 2019.

  • Chella A, Manzotti R (2009) Machine consciousness: a manifesto for robotics. Int J Mach Conscious 1(01):33–51

  • Chella A, Manzotti R (2013) Artificial consciousness. Imprint Academic, Exeter

  • Chopra S (2010) Rights for autonomous artificial agents? Commun ACM 53(8):38–40

  • Chopra S, White LF (2011) A legal theory for autonomous artificial agents. University of Michigan Press, Michigan

  • Chung CA (ed) (2003) Simulation modelling handbook: a practical approach. CRC Press, London

  • Clarke R (1993) Asimov’s laws of robotics: implications for information technology. Part 1. Computer 26(12):53–61

  • Clarke R (1994) Asimov’s laws of robotics: implications for information technology. Part 2. Computer 27(1):57–66

  • Clowes R, Torrance S, Chrisley R (2007) Machine consciousness. J Conscious Stud 14(7):7–14

  • Coeckelbergh M (2010a) Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12(3):235–241

  • Coeckelbergh M (2010b) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12(3):209–221

  • Danaher J (2017) The symbolic-consequences argument in the sex robot debate. In: Danaher J, McArthur N (eds) Robot sex: social and ethical implications. MIT Press, Cambridge

  • Danielson P (2002) Artificial morality: virtuous robots for virtual games. Routledge, London

  • Danks D, London AJ (2017) Algorithmic bias in autonomous systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (pp. 4691–4697). AAAI Press

  • Dashevsky E (2017) Do robots and AI deserve rights? Pc magazine. https://www.pcmag.com/article/351719/do-robots-and-ai-deserve-rights. Accessed 25 Sept 2019.

  • Dietrich M, Weisswange TH (2019) Distributive justice as an ethical principle for autonomous vehicle behavior beyond hazard scenarios. Ethics Inf Technol. https://doi.org/10.1007/s10676-019-09504-3

  • Faulhaber AK, Dittmer A, Blind F, Wächter MA, Timm S, Sütfeld LR, König P (2019) Human decisions in moral dilemmas are largely described by utilitarianism: virtual car driving study provides guidelines for autonomous driving vehicles. Sci Eng Ethics 25(2):399–418

  • Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379

  • Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Schafer B (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28(4):689–707

  • Forester T, Morrison P (1991) Computer ethics: cautionary tales and ethical dilemmas in computing. Harvard J Law Technol 4(2):299–305

  • Gamez D (2008) Progress in machine consciousness. Conscious Cogn 17(3):887–910

  • Gershman SJ, Horvitz EJ, Tenenbaum JB (2015) Computational rationality: a converging paradigm for intelligence in brains, minds, and machines. Science 349(6245):273–278

  • Goodall NJ (2014) Machine ethics and automated vehicles. In: Meyer G, Beiker S (eds) Road vehicle automation. Springer, Cham, pp 93–102

  • Grau C (2006) There is no “I” in “robot”: robots and utilitarianism. IEEE Intell Syst 21(4):52–55

  • Grodzinsky FS, Miller KW, Wolf MJ (2008) The ethics of designing artificial agents. Ethics Inf Technol 10(2–3):115–121

  • Hajian S, Bonchi F, Castillo C (2016) Algorithmic bias: from discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 2125–2126). ACM.

  • Hohfeld WN (1923) Fundamental legal conceptions as applied in judicial reasoning: and other legal essays. Yale University Press, New Haven

  • Howard D, Muntean I (2016) A minimalist model of the artificial autonomous moral agent (AAMA). In 2016 AAAI Spring Symposium Series.

  • Johnson DG (2004) Computer ethics. In: Floridi L (ed) The Blackwell guide to the philosophy of computing and information. Wiley, Oxford, pp 65–75

  • Johnson DG, Miller KW (2008) Un-making artificial moral agents. Ethics Inf Technol 10(2–3):123–133

  • Leben D (2017) A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19(2):107–115

  • Leben D (2018) Ethics for robots: how to design a moral algorithm. Routledge, Abingdon

  • Levesque HJ (1986) Knowledge representation and reasoning. Ann Rev Comput Sci 1(1):255–287

  • Lewis RL, Howes A, Singh S (2014) Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics Cognit Sci 6(2):279–311

  • Lin P, Abney K, Bekey GA (2012) The ethical and social implications of robotics. MIT Press, Cambridge

  • Lokhorst GJC (2011) Computational meta-ethics. Minds Mach 21(2):261–274

  • Loukides M (2017) On computational ethics. O’Reilly. https://www.oreilly.com/radar/on-computational-ethics/. Accessed 25 Sept 2019.

  • Lumbreras S (2017) The limits of machine ethics. Religions 8(5):100. https://doi.org/10.3390/rel8050100

  • Mabaso BA (2020) Computationally rational agents can be moral agents. Ethics Inf Technol 24:1–9

  • Malle BF, Scheutz M (2014) Moral competence in social robots. In Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology (p. 8), IEEE Press, Piscataway

  • Marino D, Tamburrini G (2006) Learning robots and human responsibility. Int Rev Inf Ethics 6(12):46–51

  • McDermott D (2007) Artificial intelligence and consciousness. In: Zelazo PD, Moscovitch M, Thompson E (eds) The Cambridge handbook of consciousness. Cambridge University Press, Cambridge, pp 117–150

  • McDermott D (2008) Why ethics is a high hurdle for AI. In: North American conference on computing and philosophy, Bloomington. https://cs-www.cs.yale.edu/homes/dvm/papers/ethical-machine.pdf

  • Moor JH (1985) What is computer ethics? Metaphilosophy 16(4):266–275

  • Moor JH (1995) Is ethics computable? Metaphilosophy 26(1/2):1–21

  • Moor JH (2006) The nature, importance and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21

  • Moor J (2009) Four kinds of ethical robots. Philosophy Now 72:12–14

  • Müller VC (2019) Ethics of artificial intelligence and robotics. In: Zalta EN (ed) Stanford Encyclopedia of Philosophy. https://philarchive.org/archive/MLLEOA-4. Accessed 22 Sept 2019

  • Parthemore J, Whitby B (2014) Moral agency, moral responsibility, and artifacts: what existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. Int J Mach Conscious 6(02):141–161

  • Powers TM (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51

  • Ramey CH (2005) ‘For the sake of others’: The ‘personal’ ethics of human-android interaction. Cognitive Science Society, Stresa, pp 137–148

  • Reggia JA (2013) The rise of machine consciousness: Studying consciousness with computational models. Neural Networks 44:112–131

  • Rodd MG (1995) Safe AI—is this possible? Eng Appl Artif Intell 8(3):243–250

  • Russell S, Hauert S, Altman R, Veloso M (2015) Ethics of artificial intelligence. Nature 521(7553):415–416

  • Ruvinsky AI (2007) Computational ethics. In: Quigley M (ed) Encyclopaedia of information ethics and security. IGI Global, Hershey, pp 76–82

  • Sauer F (2016) Stopping ‘Killer Robots’: why now is the time to ban autonomous weapons systems. Arms Control Today 46(8):8–13

  • Shachter RD, Kanal LN, Henrion M, Lemmer JF (eds) (2017) Uncertainty in artificial intelligence 5 (Vol. 10). Elsevier, Amsterdam

  • Smith A, Anderson J (2014) AI, Robotics, and the future of jobs. Pew Research Center, p 6.

  • Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Minds Mach 16:141–161

  • Starzyk JA, Prasad DK (2011) A computational model of machine consciousness. Int J Mach Conscious 3(02):255–281

  • Sullins JP (2012) Robots, love, and sex: the ethics of building a love machine. IEEE Trans Affect Comput 3(4):398–409

  • Tavani HT (2002) The uniqueness debate in computer ethics: what exactly is at issue, and why does it matter? Ethics Inf Technol 4(1):37–54

  • Torrance S (2008) Ethics and consciousness in artificial agents. AI Soc 22(4):495–521

  • Torrance S (2013) Artificial agents and the expanding ethical circle. AI Soc 28(4):399–414

  • Turkle S (2006) A nascent robotics culture: new complicities for companionship. American Association for Artificial Intelligence Technical Report Series AAAI. https://www.aaai.org/Library/Workshops/2006/ws06-09-010.php. Accessed 22 Sept 2019.

  • Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24(3):251

  • Van de Voort M, Pieters W, Consoli L (2015) Refining the ethics of computer-made decisions: a classification of moral mediation by ubiquitous machines. Ethics Inf Technol 17(1):41–56

  • Van den Hoven J (2010) The use of normative theories in computer ethics. In: Floridi L (ed) The Cambridge handbook of information and computer ethics. Cambridge University Press, Cambridge, pp 59–76

  • Veruggio G, Operto F (2006) Roboethics: a bottom-up interdisciplinary discourse in the field of applied ethics in robotics. Int Rev Inf Ethics 6(12):2–8

  • Waldrop MM (1987) A question of responsibility. AI Mag 8(1):28–28

  • Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  • Wallach W, Allen C (2012) Hard problems: framing the Chinese room in which a robot takes a moral Turing test. https://wendellwallach.com/wordpress/wp-content/uploads/2013/10/Hard-Problems-AISB-IACAP2012-Wallach-and-Allen.pdf. Accessed 25 Sept 2019.

  • Wallach W, Asaro P (2017) Machine ethics and robot ethics. Routledge, New York

  • Wallach W, Franklin S, Allen C (2010) A conceptual and computational model of moral decision making in human and artificial agents. Topics Cognit Sci 2(3):454–485

  • Yampolskiy RV (2012) Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In: Müller VC (ed) Philosophy and theory of artificial intelligence. Springer, Berlin, pp 389–396

Acknowledgements

I would like to thank Prof Thaddeus Metz and Prof Emma Ruttkamp-Bloem, who both took the time to read the initial drafts of this paper and made significant comments and suggestions. I would also like to thank my research group members at the Centre for Artificial Intelligence Research, University of Pretoria.

Funding

None.

Author information

Corresponding author

Correspondence to Samuel T. Segun.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Segun, S.T. From machine ethics to computational ethics. AI & Soc 36, 263–276 (2021). https://doi.org/10.1007/s00146-020-01010-1
