Abstract
We propose that the prevalent moral aversion to autonomous weapons systems (AWS) is supported by a pair of compelling
objections. First, we argue that even a sophisticated robot is not the kind of thing that is
capable of replicating human moral judgment. This conclusion follows if human moral
judgment is not codifiable, i.e., if it cannot be captured by a list of rules. Moral judgment
requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so they cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with
that aim in mind. Second, we then argue that even if it is possible for a sufficiently
sophisticated robot to make ‘moral decisions’ that are extensionally indistinguishable from
(or better than) human moral decisions, these ‘decisions’ could not be made for the right
reasons. This means that the ‘moral decisions’ made by AWS are bound to be morally deficient
in at least one respect even if they are extensionally indistinguishable from human ones. Our
objections thus support the prevalent aversion to the employment of AWS in war. They
also enjoy several significant advantages over the most common objections to AWS in the
literature.