Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective

In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73 (2023)

Abstract

In this paper we analyse the conditions for attributing to autonomous AI systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). To address the fundamental issue in ME of the unavoidable “opacity” of their ethically/legally relevant decisions, we start from the neuroethical evidence in cognitive science. In humans, the “transparency”, and hence the “ethical accountability”, of their actions as responsible moral agents does not contradict the unavoidable “opacity” (unawareness) of the brain processes by which they perform their moral judgements about the right action to execute. In fact, the moral accountability of our actions depends on what happens immediately before and after our “moral judgements” about the right action to execute (formally, deontic first-order logic (FOL) decisions). That is, our moral accountability depends on the “ethical constraints” we impose on our judgement before performing it in an opaque way. Above all, however, it depends on the “ethical assessment”, or explicit “moral reasoning”, performed after and over the moral judgement, before we execute our actions (a deontic higher-order logic (HOL) assessment). In this way, in the light of the AI “imitation game”, the consistent attribution of the status of ethically accountable artificial moral agents to autonomous AI systems depends on two analogous conditions. First, it depends on the presence of “ethical constraints” to be satisfied by their supervised Machine Learning (ML) optimization algorithm during its training phase, so as to give the system ethical skills (“competences”) in its decisions. Second, and decisively, it depends on the presence in an autonomous AI system of a deontic HOL “ethical reasoner” that performs an automatic and fully transparent assessment (a metalogical deontic valuation) of the decisions taken by the ethically skilled ML algorithm about the right action to execute, before executing it. Finally, we show that the proper deontic FOL and HOL for this class of artificial moral agents is Kripke’s modal relational logic, in its algebraic topological formalization. This logic is naturally implemented in the dissipative quantum field theory (QFT) unsupervised deep learning of our brains, based on the “doubling of the degrees of freedom” (DDF), and hence in so-called “deep-belief” artificial neural networks for statistical data pre-processing. This unsupervised learning procedure is also compatible with the use of the “maximin fairness principle” as a balancing aggregation principle over the statistical variables in Sen’s formal theory of fairness.
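The two conditions above suggest a concrete two-stage architecture. The sketch below is a minimal Python illustration of that reading, not code from the paper: stage 1 penalizes violations of explicit ethical constraints during supervised training, and stage 2 runs a transparent deontic check over each action the opaque policy proposes, before that action is executed. All names (`EthicalConstraint`, `DeonticAssessor`, the example actions, the penalty weight) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

Action = str

@dataclass
class EthicalConstraint:
    """A predicate the learner is penalized for violating during training."""
    name: str
    satisfied: Callable[[Action], bool]

def constrained_loss(task_loss: float, action: Action,
                     constraints: List[EthicalConstraint],
                     penalty: float = 10.0) -> float:
    """Stage 1: augment the task loss with a penalty term for each
    violated ethical constraint, so that training instils the 'ethical
    skills' ('competences') the abstract describes."""
    violations = sum(1 for c in constraints if not c.satisfied(action))
    return task_loss + penalty * violations

@dataclass
class DeonticAssessor:
    """Stage 2: a transparent, rule-level check run after the opaque
    model proposes an action and before that action is executed."""
    obligations: List[Callable[[Action], bool]]   # every one must hold
    prohibitions: List[Callable[[Action], bool]]  # none may hold

    def permit(self, action: Action) -> bool:
        return (all(ob(action) for ob in self.obligations)
                and not any(pr(action) for pr in self.prohibitions))

# Stage 1 example: an action violating a constraint is penalized during
# training, steering the opaque policy away from it.
no_deception = EthicalConstraint("no_deception", lambda a: a != "deceive_user")
print(constrained_loss(0.3, "deceive_user", [no_deception]))  # -> 10.3

# Stage 2 example: the opaque policy proposes, the assessor disposes.
assessor = DeonticAssessor(obligations=[lambda a: a != ""],
                           prohibitions=[lambda a: a == "deceive_user"])
for proposed in ["answer_query", "deceive_user"]:
    print(proposed, "->", "execute" if assessor.permit(proposed) else "veto")
```

The point of the split is that only stage 2 needs to be transparent: the training-time constraints shape the opaque judgement, while the explicit rule set of the assessor is what can be audited, mirroring the abstract’s distinction between opaque FOL decisions and a transparent HOL assessment over them.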
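The abstract’s closing appeal to the maximin fairness principle also admits a compact rendering. The toy function below is my gloss, not the paper’s formalization of Sen’s theory: it aggregates per-group welfare scores by preferring the candidate whose worst-off group fares best; the policy names and scores are invented.

```python
from typing import Dict, List

def maximin(candidates: Dict[str, List[float]]) -> str:
    """Return the candidate policy whose worst-off group fares best:
    the max over policies of the min over per-group welfare scores."""
    return max(candidates, key=lambda name: min(candidates[name]))

policies = {
    "policy_A": [0.9, 0.2, 0.8],  # high average, poor minority outcome
    "policy_B": [0.6, 0.5, 0.6],  # lower average, better worst case
}
print(maximin(policies))  # -> policy_B
```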

Links

PhilArchive

Author's Profile

Gianfranco Basti
Pontifical Lateran University
