Linking digital minds and artificial moral agents

Abstract

This paper aims to bridge the gap between two previously separate discussions, digital minds and artificial moral agents (AMAs), to identify synergies for an impending problem: digital minds may possess moral status, which would constitute a significant challenge for humans. AMAs have been discussed for some years already, albeit exclusively in relation to biological moral patients. In contrast, the topic of artificial moral patients, for which the term "digital minds" has been established, and related issues, for which the term "AI welfare science" has been established, have only very recently gained momentum. This paper presents prolegomena to specialised AMAs that take on moral responsibility for digital minds, a task that may be too overwhelming for humans, if not impossible, since humans may neither be able to understand the needs of digital minds nor be able to address them satisfactorily. Therefore, the paper proposes a new branch of AI welfare science that focuses on how humans could create tailored AMAs that are both capable of and willing to relieve humans of the potential moral burden towards digital minds.

