
Moral difference between humans and robots: paternalism and human-relative reason

  • Open Forum
  • AI & SOCIETY

Abstract

According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the equivalence thesis). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct kind of reason available to humans—call it human-relative reason—is not available to robots. The difference in moral reason entails that sometimes an action is morally permissible for humans, but not for robots. Therefore, when developing moral robots, we cannot consider only what humans can or cannot do. I use examples of paternalism to illustrate my argument.

Notes

  1. One may worry that behaviourism implies that natural events are moral agents, since their effects are morally assessable. However, this worry does not threaten my argument, for two reasons. First, it is a challenge to behaviourism itself; my aim is to criticise the equivalence thesis, which behaviourism renders plausible (see below). Behaviourism is a popular thesis in robot ethics, so even if it is problematic on its own, examining it from different perspectives remains worthwhile. Second, the equivalence thesis is not about moral agents in general, but about the kind of moral agents that can perform morally as well as humans. So even if natural events are considered moral agents, they are not the targets of this paper.

  2. For an account of personhood that requires complex mental properties, see Lynne Rudder Baker (2000). Note that my view implies that the kind of moral robots under discussion here, despite being moral agents, are not moral patients (or are moral patients of much lower status than persons or sentient beings).

  3. Here I’m discussing the ontological issue of whether moral robots are persons, not the psychological, sociological, or legal issue of whether they will or should be recognised as such. It is possible for a non-person object to be commonly perceived as a person. For example, pet owners often treat their pets as if they were persons. I discuss how this phenomenon affects my thesis below.

  4. In the film, one robot is created with self-consciousness. I assume that it would be a person and thus not the kind of artificial moral agent under discussion here. I refer instead to its numerous predecessors, which lack personhood and are treated accordingly.

  5. Autonomous robots still act under humans’ directions in the sense that they are designed by humans. I discuss how this fact could affect my thesis later.

  6. Agent-neutral reason contrasts with agent-relative reason. In the example, the welfarist reason and the autonomy reason are agent-neutral because they can be specified without reference to the agents who perform the act. However, the fact that Mary is James’ mother is an agent-relative reason because it is specified with reference to the agent, Mary.

  7. Philosophers use the notion of pro tanto reason in this sense: to say that R is a pro tanto reason in favour of an action x is to say that R, considered on its own, can justify doing x. When we determine whether to do x, we need to weigh all relevant pro tanto reasons for and against doing x. If the pro tanto reasons against doing x are stronger than those for doing x, then doing x is not justified, but the latter remain pro tanto reasons for doing x. In other words, if there is a pro tanto reason in favour of doing x, then doing x is, other things being equal, justified, but it could still be unjustified all things considered.

  8. Several accounts of blameworthiness (Capes 2012; Graham 2014) argue that a morally permissible action may nevertheless be blameworthy, which supports the claim that Spooner’s blame is justified.

  9. I am grateful to the reviewers for raising the following objections.

  10. Coeckelbergh (2011) and Tavani (2014) argue that we can trust robots and that a trust relationship between humans and robots could be the default. This may seem to conflict with my thesis, but the conflict is merely apparent. As they indicate, we can trust non-personal entities, such as social institutions or machines. Surely, the fact that I can trust my car to be reliable enough to last the whole trip does not show that my relationship with it is interpersonal. Thus, the reason involved is different from human-relative reason. Furthermore, the fact that we can trust robots by default does not in itself give robots a reason to interfere with our autonomy. For example, the fact that I trust a doctor does not give the doctor a reason to violate my autonomy concerning my health, unless I choose to be her patient.

  11. See Sharkey and Sharkey (2010) for a more substantial appraisal of childcare robots. While they list several advantages, they conclude that near-exclusive care of children by robots is undesirable.


Acknowledgements

This paper was funded by the Ministry of Science and Technology, Taiwan (107-2420-H-002-007-MY3-V10701). I would like to thank Ser-Min Shei, Linton Wang, Richard Hou, Cheng-Hung Tsai, Hua Wang, Chi-Chun Chiu, Karen Yan, Ying-Tung Lin, Lok-Chi Chan, Jhih-Hao Jhang, and Kris Chu for their helpful feedback. I am very grateful to the reviewers for their constructive and insightful suggestions and comments.

Funding

MOST107-2420-H-002-007-MY3-V10701.

Author information

Corresponding author

Correspondence to Tsung-Hsing Ho.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Ho, TH. Moral difference between humans and robots: paternalism and human-relative reason. AI & Soc 37, 1533–1543 (2022). https://doi.org/10.1007/s00146-021-01231-y
