Abstract
How should human beings and robots interact with one another? Nyholm’s
answer to this question is given below in the form of a conditional:
If a robot looks or behaves like an animal or a human being, then we
should treat it with a degree of moral consideration (p. 201).
Although this is not a novel claim in the literature on AI ethics, what is new is
the reason Nyholm gives to support it: we should treat robots that look
like human or non-human animals with a certain degree of moral restraint
out of respect for human beings or other beings with moral status. Although
Danaher and Coeckelbergh also claim that we should treat robots with a degree
of moral consideration, the reasons they give for making this claim focus on
duties or rights attaching to the robots themselves (see J. Danaher, “Welcoming
Robots into the Moral Circle: A Defence of Ethical Behaviourism,” Science
and Engineering Ethics, (2019): 1–27 or M. Coeckelbergh, “Moral Appearances:
Emotions, Robots and Human Morality,” Ethics and Information Technology,
12(3) (2010): 235–241.). Nyholm disagrees with this type of reasoning and
claims that until robots develop a human- or animal-like inner life, we have no
direct duties to the robots themselves. Rather, it is out of respect for human
beings or other beings with moral status that we should treat some robots
with moral restraint. Gerdes, similarly inspired by Kant, focuses on the human
agent to argue that we should avoid treating robots in cruel ways because this
may corrupt the human agent’s character (see A. Gerdes, “The Issue of Moral
Consideration in Robot Ethics,” SIGCAS Computers and Society, 45(3) (2015):
274–279.). Nyholm’s contribution here is to extend this view so that the corruption
or harm in question is done to the humanity in all of us.