Some Technical Challenges in Designing an Artificial Moral Agent
Abstract
Autonomous agents (robots) are no longer the subject of science fiction novels. Self-driving cars, for example, may be on our roads within a few years. These machines will necessarily interact with humans, and in these interactions they must take into account the moral outcomes of their actions. Yet we are nowhere near designing a machine capable of autonomous moral reasoning. In some sense, this is understandable, as commonsense reasoning has turned out to be very hard to formalize.
In this paper, we identify several features of commonsense reasoning that are specific to the domain of morality. We show that these peculiarities, such as moral conflicts and priorities among norms, pose serious challenges for any logical formalism intended to represent moral reasoning. We then present a variation of default logic adapted from [5] and show how it addresses the problems we identified.
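To fix notation for readers unfamiliar with the formalism, here is a minimal sketch of a default rule in standard Reiter-style default logic; this illustration is ours and need not coincide with the variant adapted from [5]. A default has the form
\[
\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}
\]
read as: if the prerequisite $\alpha$ is derivable and each justification $\beta_i$ is consistent with what is currently believed, then conclude $\gamma$. For instance, a defeasible moral norm such as "normally, one must not cause harm" might be encoded as $\top : \neg \mathit{harm} \,/\, \neg \mathit{harm}$, and conflicts between such defaults can then be resolved by imposing priorities among them.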