Abstract
In this paper we analyse the conditions for attributing to autonomous AI systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). To address the fundamental issue in ME of the unavoidable “opacity” of such systems’ ethically/legally relevant decisions, we start from neuroethical evidence in cognitive science. In humans, the “transparency”, and hence the “ethical accountability”, of actions performed as responsible moral agents does not contradict the unavoidable “opacity” (unawareness) of the brain processes by which moral judgements on the right action to execute are made. Indeed, the moral accountability of our actions depends on what happens immediately before and after our “moral judgements” on the right action to execute (formally, deontic first-order logic (FOL) decisions). That is, our moral accountability depends on the “ethical constraints” we impose on our judgement before it is performed in an opaque way. Above all, however, it depends on the “ethical assessment”, or explicit “moral reasoning”, performed after and over the moral judgement, before executing our actions (a deontic higher-order logic (HOL) assessment). Thus, in the light of the AI “imitation game”, the consistent attribution of the status of ethically accountable artificial moral agent to an autonomous AI system depends on two analogous conditions. Firstly, it depends on the presence of “ethical constraints” to be satisfied by the system’s supervised Machine Learning (ML) optimization algorithm during its training phase, giving the system ethical skills (“competences”) in its decisions.
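The first condition, imposing “ethical constraints” on a supervised ML optimization during training, can be illustrated with a minimal sketch: a logistic model is fitted to task labels while a penalty term pushes the predicted probability of any action flagged as ethically forbidden towards zero. The toy model, the deontic flag, and the penalty form are illustrative assumptions, not the paper’s actual algorithm.

```python
import numpy as np

# Toy "ethically constrained" supervised training: the task loss is
# cross-entropy on task labels; the ethical constraint is a penalty
# lam * mean(forbidden * p**2) that drives the score of deontically
# forbidden actions towards zero during the training phase.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))               # features of candidate actions
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # task labels ("useful action")
forbidden = X[:, 2] > 1.0                   # deontic flag: ethically forbidden

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(4)
lr, lam = 0.1, 5.0                          # lam weighs the ethical penalty

for _ in range(500):
    p = sigmoid(X @ w)
    grad_task = X.T @ (p - y) / len(y)      # cross-entropy gradient (task)
    # Gradient of the penalty lam * mean(forbidden * p**2) w.r.t. w.
    grad_eth = X.T @ (forbidden * 2 * p * p * (1 - p)) / len(y)
    w -= lr * (grad_task + lam * grad_eth)

p = sigmoid(X @ w)
# After training, forbidden actions receive systematically lower scores
# than permitted ones, while the task is still learned on permitted actions.
```

The design point is that the constraint is imposed *before* the opaque optimization runs, mirroring the paper’s claim that ethical competence is built in at training time rather than read off the trained model afterwards.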
Secondly, and decisively, it depends on the presence in the autonomous AI system of a deontic HOL “ethical reasoner” that performs an automatic and fully transparent assessment (a metalogical deontic valuation) of the decisions taken by the ethically skilled ML algorithm about the right action to execute, before that action is executed. Finally, we show that the proper deontic FOL and HOL for this class of artificial moral agents is Kripke’s modal relational logic, in its algebraic topological formalization. This logic is naturally implemented in the dissipative quantum field theory (QFT) unsupervised deep learning of our brains, based on the “doubling of the degrees of freedom” (DDF), and hence in the so-called “deep-belief” artificial neural networks used for statistical data pre-processing. This unsupervised learning procedure is also compliant with the use of the “maximin fairness principle” as a balancing aggregation principle over the statistical variables in Sen’s formal theory of fairness.
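The second condition, a transparent ethical reasoner assessing the ML algorithm’s decisions before execution, together with maximin aggregation in Sen’s sense, can be sketched as follows. The rule base, candidate actions, and group utilities are invented for illustration; this is not the paper’s formalism, only a minimal procedural analogue.

```python
# Sketch of a transparent deontic "ethical reasoner" gating an opaque ML
# decision before execution: an explicit rule base vetoes forbidden actions,
# and among the permitted candidates a maximin rule (Rawls/Sen) selects the
# action whose worst-off group fares best.

FORBIDDEN = {"action_c"}  # human-readable deontic rule base (assumed example)

# Candidate actions proposed by the ML component, with per-group utilities.
candidate_actions = {
    "action_a": {"group_1": 0.9, "group_2": 0.2, "group_3": 0.7},
    "action_b": {"group_1": 0.6, "group_2": 0.5, "group_3": 0.6},
    "action_c": {"group_1": 0.8, "group_2": 0.3, "group_3": 0.9},
}

def ethical_reasoner(actions):
    # Step 1: metalogical veto -- discard deontically forbidden actions.
    allowed = {a: u for a, u in actions.items() if a not in FORBIDDEN}
    # Step 2: maximin aggregation -- maximize the worst-off group's utility.
    return max(allowed, key=lambda a: min(allowed[a].values()))

print(ethical_reasoner(candidate_actions))  # "action_b" (worst-off utility 0.5)
```

Unlike the ML decision itself, every step of this assessment is inspectable: the veto rules and the aggregation principle are explicit, which is what makes the valuation “fully transparent” in the sense the abstract requires.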