Abstract
According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the equivalence thesis). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct kind of reason available to humans—call it human-relative reason—is not available to robots. The difference in moral reason entails that sometimes an action is morally permissible for humans, but not for robots. Therefore, when developing moral robots, we cannot consider only what humans can or cannot do. I use examples of paternalism to illustrate my argument.
Notes
One may worry that behaviourism seems to imply that natural events are moral agents, since their effects are morally assessable. However, this worry does not seriously threaten my argument. First, it is a challenge to behaviourism itself; my aim is to criticise the equivalence thesis that behaviourism renders plausible (see below). Note that behaviourism is a popular thesis in robot ethics, so even if behaviourism itself is problematic, examining its implications from different perspectives remains worthwhile. Second, the equivalence thesis is not about moral agents in general, but about the kind of moral agents that can perform as morally as humans. So even if natural events are considered moral agents, they are not the targets of this paper.
For an account of personhood that requires complex mental properties, see Lynne Rudder Baker (2000). Note that my view implies that the kind of moral robots under discussion here, despite being moral agents, are not moral patients (or are moral patients of much lower status than persons or sentient beings).
Here I’m discussing the ontological issue of whether moral robots are persons, not a psychological, sociological or legal issue of whether they will or should be recognised as such. It is possible that a non-person object is often perceived as a person. For example, pet owners often treat their pets as if they are persons. I discuss how this phenomenon affects my thesis below.
In the film, one robot is created to have self-consciousness. I assume that it would be a person, so it is not the kind of artificial moral agent under discussion here. I refer instead to its numerous predecessors, which lack personhood and are treated as such.
Autonomous robots still act under humans’ directions in the sense that they are designed by humans. I discuss how this fact could affect my thesis later.
Agent-neutral reason contrasts with agent-relative reason. In the example, the welfarist reason and the autonomy reason are agent-neutral because they can be specified without reference to the agents who perform the act. However, the fact that Mary is James’ mother is an agent-relative reason because it is specified with reference to the agent, Mary.
Philosophers use the notion of pro tanto reason in this sense: to say that R is a pro tanto reason in favour of an action x is to say that R, considered on its own, can justify doing x. When we determine whether to do x, we need to weigh all relevant pro tanto reasons for and against doing x. If pro tanto reasons against doing x are stronger than the reasons for doing x, then doing x is not justified—but it remains true that they are pro tanto reasons for doing x. In other words, if there is a pro tanto reason in favour of doing x, doing x is, other things equal, justified, but it could be unjustified all-things-considered.
I am grateful to the reviewers for raising the following objections.
Coeckelbergh (2011) and Tavani (2014) argue that we can trust robots and that a trust relationship between humans and robots could be the default. This may seem to conflict with my thesis, but the conflict is merely apparent. As they indicate, we can trust non-personal entities, such as social institutions or machines. Surely, the fact that I can trust my car to be reliable enough to last for the whole trip does not show that the relationship is interpersonal. Thus, this kind of reason is different from human-relative reason. Furthermore, the fact that we can by default trust robots does not in itself give robots a reason to interfere with our autonomy. For example, the fact that I trust a doctor does not give the doctor a reason to violate my autonomy concerning my health, unless I choose to be her patient.
See Sharkey and Sharkey (2010) for a more substantial appraisal of childcare robots. While they list several advantages, they conclude that near-exclusive care of children by robots is undesirable.
References
Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261. https://doi.org/10.1080/09528130050111428
Alvarez M (2017) Reasons for action: justification, motivation, explanation. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University
Baker LR (2000) Persons and bodies: a constitution view. Cambridge University Press
Beavers AF (2012) Moral machines and the threat of ethical nihilism. In: Lin P, Bekey G, Abney K (eds) Robot ethics: the ethical and social implications of robotics. The MIT Press, Cambridge, MA, pp 333–344
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Brożek B, Janik B (2019) Can artificial intelligences be moral agents? New Ideas Psychol 54:101–106. https://doi.org/10.1016/j.newideapsych.2018.12.002
Capes JA (2012) Blameworthiness without wrongdoing. Pac Philos Q 93(3):417–437. https://doi.org/10.1111/j.1468-0114.2012.01433.x
Coates DJ, Tognazzini NA (2018) Blame. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University
Coeckelbergh M (2011) Can we trust robots? Ethics Inf Technol 14(1):53–60. https://doi.org/10.1007/s10676-011-9279-1
Dietrich E (2007) After the humans are gone (Douglas Engelbart keynote address, North American Computing and Philosophy Conference, Rensselaer Polytechnic Institute, August 2006). J Exp Theor Artif Intell 19(1):55–67. https://doi.org/10.1080/09528130601115339
Dietrich E (2011) Homo Sapiens 2.0. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 531–538
Dworkin G (2020) Paternalism. In: Zalta EN (ed) The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University
Epley K (2019) Emotions, attitudes, and reasons. Pac Philos Q 100(1):256–282. https://doi.org/10.1111/papq.12242
Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14(3):349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Fossa F (2018) Artificial moral agents: moral mentors or sensible tools? Ethics Inf Technol 20(2):115–126. https://doi.org/10.1007/s10676-018-9451-y
Graham PA (2014) A sketch of a theory of moral blameworthiness. Philos Phenomenol Res 88(2):388–409. https://doi.org/10.1111/j.1933-1592.2012.00608.x
Grodzinsky FS, Miller KW, Wolf MJ (2008) The ethics of designing artificial agents. Ethics Inf Technol 10(2–3):115–121. https://doi.org/10.1007/s10676-008-9163-9
Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press
Hakli R, Mäkelä P (2019) Moral responsibility of robots and hybrid agents. Monist 102(2):259–275. https://doi.org/10.1093/monist/onz009
Hall JS (2011) Ethics for self-improving machines. In: Anderson M, Anderson SL (eds) Machine ethics. Cambridge University Press, Cambridge, pp 512–523
Himma KE (2009) Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics Inf Technol 11(1):19–29. https://doi.org/10.1007/s10676-008-9167-5
Johnson DG, Verdicchio M (2019) AI, agency and responsibility: the VW fraud case and beyond. AI Soc 34:639–647. https://doi.org/10.1007/s00146-017-0781-9
Laukyte M (2016) Artificial agents among us: should we recognize them as agents proper? Ethics Inf Technol 19(1):1–17. https://doi.org/10.1007/s10676-016-9411-3
Proyas A (2004) I, robot. 20th Century Fox, United States
Scanlon TM (2008) Moral dimensions: permissibility, meaning, blame. Belknap Press, Cambridge, MA
Sharkey N, Sharkey A (2010) The crying shame of robot nannies. Interact Stud 11(2):161–190. https://doi.org/10.1075/is.11.2.01sha
Strawson PF (1974) Freedom and resentment and other essays. Routledge, London
Tappolet C (2016) Emotions, value, and agency. Oxford University Press, Oxford
Tavani HT (2014) Levels of trust in the context of machine ethics. Philos Technol 28(1):75–90. https://doi.org/10.1007/s13347-014-0165-8
Tegmark M (2017) Life 3.0: being human in the age of artificial intelligence. Knopf, New York
Watson G (2014) Peter Strawson on responsibility and sociality. In: Shoemaker D, Tognazzini N (eds) Oxford studies in agency and responsibility. Oxford University Press, Oxford, pp 15–32
Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. In: Bostrom N, Ćirković MM (eds) Global catastrophic risks. Oxford University Press, Oxford, pp 308–345
Acknowledgements
This paper is funded by the Ministry of Science and Technology, Taiwan (107-2420-H-002-007-MY3-V10701). I would like to thank Ser-Min Shei, Linton Wang, Richard Hou, Cheng-Hung Tsai, Hua Wang, Chi-Chun Chiu, Karen Yan, Ying-Tung Lin, Lok-Chi Chan, Jhih-Hao Jhang, and Kris Chu for their helpful feedback. I am very grateful to the reviewers for their constructive and insightful suggestions and comments.
Funding
MOST107-2420-H-002-007-MY3-V10701.
Cite this article
Ho, TH. Moral difference between humans and robots: paternalism and human-relative reason. AI & Soc 37, 1533–1543 (2022). https://doi.org/10.1007/s00146-021-01231-y