Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them

Theoria: Beograd (forthcoming)

Abstract

This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.

Links

PhilArchive


Similar books and articles

Agency Laundering and Information Technologies. Alan Rubel, Clinton Castro & Adam Pham - 2019 - Ethical Theory and Moral Practice 22 (4): 1017-1041.


Author Profiles

Andrea Berber
University of Belgrade
Jelena Mijić
University of Belgrade

Citations of this work

No citations found.
