Outline of a sensory-motor perspective on intrinsically moral agents

Adaptive Behavior 24 (5):306-319 (2016)

Abstract

We propose that moral behaviour of artificial agents could be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
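The first and sixth competencies — values grounded in the agent's own internal state, and learning the consequences of actions — can be illustrated with a minimal sketch. The following toy agent is a hypothetical illustration only, not the authors' model: it maintains a learned estimate of the internal value produced by each action and updates that estimate from experienced consequences, then selects the action with the highest expected internal value.

```python
# Hypothetical toy sketch (not the authors' model): an agent whose action
# values are grounded in an internal value signal (competency 1) and are
# updated from the observed consequences of its own actions (competency 6).

class ToyMoralAgent:
    def __init__(self, actions, lr=0.5):
        # Learned expected internal value of each action, initially neutral.
        self.values = {a: 0.0 for a in actions}
        self.lr = lr  # learning rate for the consequence-learning update

    def choose(self):
        # Select the action with the highest learned internal value.
        return max(self.values, key=self.values.get)

    def observe(self, action, internal_value):
        # Move the estimate toward the internal value signal produced by
        # the action's consequences (e.g. benefit or harm to self/others).
        self.values[action] += self.lr * (internal_value - self.values[action])

agent = ToyMoralAgent(["help", "ignore"])
# Repeated experience: "help" yields a positive internal signal, "ignore" a negative one.
for _ in range(20):
    agent.observe("help", 1.0)
    agent.observe("ignore", -0.5)

print(agent.choose())  # after learning, the agent prefers "help"
```

The update rule is a standard delta rule; the paper's proposal of course involves far richer embodied dynamics, but the sketch shows the basic loop of acting, experiencing an internally grounded value, and adjusting future choices accordingly.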



Author's Profile

Christian Balkenius
Lund University
