Some Technical Challenges in Designing an Artificial Moral Agent

In Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12416. Springer. pp. 481-491 (2020)

Abstract

Autonomous agents (robots) are no longer a subject of science fiction novels. Self-driving cars, for example, may be on our roads within a few years. These machines will necessarily interact with humans and, in these interactions, must take into account the moral outcomes of their actions. Yet we are nowhere near designing a machine capable of autonomous moral reasoning. In some sense, this is understandable, as commonsense reasoning turns out to be very hard to formalize. In this paper, we identify several features of commonsense reasoning that are specific to the domain of morality. We show that its peculiarities, such as moral conflicts or priorities among norms, give rise to serious challenges for any logical formalism representing moral reasoning. We then present a variation of default logic adapted from [5] and show how it addresses the problems we identified.
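To make the abstract's mention of default logic with priorities among norms concrete, the following is a minimal illustrative sketch, not the paper's actual formalism: norms are encoded as defaults with numeric priorities, and a default is blocked when a higher-priority default has already drawn a conflicting conclusion. The scenario, names, and the naive literal-negation scheme are all invented for illustration.

```python
# Sketch of prioritized defaults (hypothetical encoding, not from [5]):
# each default is (name, prerequisite, conclusion, priority).

def apply_defaults(facts, defaults):
    """Apply defaults in descending priority order; skip any default
    whose conclusion contradicts one already drawn (naive negation:
    'p' conflicts with 'not p')."""
    conclusions = set(facts)
    for name, prereq, concl, _prio in sorted(defaults, key=lambda d: -d[3]):
        negation = concl[4:] if concl.startswith("not ") else "not " + concl
        if prereq in conclusions and negation not in conclusions:
            conclusions.add(concl)
    return conclusions

# A classic moral conflict: a norm against lying vs. a norm to protect
# a person, where (in this contrived case) lying protects someone.
# The protection norm is given the higher priority.
norms = [
    ("honesty",    "asked", "not lie", 1),
    ("protection", "asked", "lie",     2),
]

result = apply_defaults({"asked"}, norms)
print(result)  # the higher-priority norm fires first, blocking "not lie"
```

The point of the sketch is only that an explicit priority ordering resolves the conflict deterministically; without priorities, both defaults would be applicable and the agent would face two incompatible extensions, which is exactly the kind of difficulty the abstract refers to.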

Links

PhilArchive


Analytics

Added to PP
2023-06-29


Author's Profile

Jarek Gryz
York University
