The three main kinds of theory in normative ethics, namely consequentialism, deontology, and virtue ethics, are often presented as the ‘palette’ from which we may choose, or from which we may take a starting point for an investigation. However, this way of doing ethics and philosophy, by the palette, may be leading some of us astray. It has led some to believe that all there is to ethics, and to the ethics of AI, is given in terms of these already devised, petrified categories of theory. It has also led others to abandon normative ethics and philosophy altogether and to resort to descriptive methods that are then used to justify action. I wish to argue that (1) we should not abandon traditional philosophical approaches, but (2a) this does not entail that the petrified palette should constitute the beginning of our philosophical investigations. Further, (2b) I recommend a non-methodological approach in which philosophical investigations are instead spurred by radical questions,¹ which arise through consideration of the practical actions (potential or otherwise) of machines and their programmers.

It is prudent not to begin from, or to restrict the space of investigation to, the palette.² The results of beginning from this kind of petrified thinking can be seen, for example, in a recent attempt to avoid the inflexibility of the three “single-component theories” by ‘combining’ them in the descriptive Agent-Deed-Consequence (ADC) model (Dubljević and Racine 2014, as cited in Wernaart 2021; Dubljević et al. 2018, as cited in Aliman and Kester 2022), which has been proposed for use in autonomous vehicles (Dubljević 2020, as cited in Wernaart 2021). However, the authors overlook the fact that each of the three kinds of theory can already acknowledge agents, deeds, and consequences, but in ways that are often mutually incompatible.³ The issue here is at root a methodological one, caused by the petrified starting point. The authors begin from a perception of “deadlocked moral intuitions” elicited by the constituent theories of the palette, which are “unsuccessful in both establishing their supremacy and in proving the moral judgements/intuitions invoked by opposing schools false” (Dubljević and Racine 2014, 5, 12 and 17, as cited in Wernaart 2021). They thereby treat the issue as one concerning the kinds of theory themselves, rather than, for example, a dilemma regarding a particular event or action that sets their investigation in motion.

Some researchers have even resorted to so-called ‘non-normative’ or descriptive ethics in an attempt to escape such perceived deadlocks, for example in the ‘Augmented Utilitarianism’ (AU) framework (Aliman and Kester 2019, as cited in Wernaart 2021, n. 92).⁴ However, this is a myopic manoeuvre, because advancing from mere descriptions of actions, beliefs, and intuitions to treating them as guides to, or standards or criteria for, what should happen entails that they are ipso facto treated as normative criteria.⁵ Such a method can arrive only at a description of what is, that is, the aggregate actions, beliefs, and intuitions of a particular population at a particular time, etc.; but this need not inform what ought to be done or what is Good, i.e., the traditional subject matter of ethics.⁶ Furthermore, making such a claim would carry a questionable commitment to the view that morality is socially constructed.⁷

A more recent expression of AU makes clear that the framework allows for moral relativism, since it is intended to be agnostic and, ideally, applicable to most ethical frameworks that a society might select.⁸ The authors also claim that AU has no philosophical aspirations, yet it is said to be focused on deliberations about what morality is (Aliman and Kester 2022, 65), and it is clearly intended to have a normative function with regard to AI, whether or not the framework itself embodies specific normative claims.

It is sometimes suggested that the problem with normative theories is one of operationalisation: that is, that they are not clearly amenable to being put into terms interpretable by a computer. The assumption here is that we have at least been presented with a list of the names of the possible solutions. All we would have to do is pick a team, combine the approaches (e.g., in ADC), or avoid them altogether by resorting to mere descriptive ethics (e.g., in AU); that is, plug in the ethics and begin beta-testing.

There is certainly a shared responsibility to employ technology in an ethical manner. However, we should not pretend that the compulsory questions regarding whether it is possible, practicable, and ethical to mathematicize ethical reasoning have already been answered. Designing machines that perform operations functionally equivalent to those of an idealized ethical machine is certainly a reasonable intermediate technological goal, especially given that we are already deploying machines in ethically significant contexts and so have no choice but to improve them.⁹ However, it would be both ethically questionable and philosophically suspect to consider them, on that basis, to be ethical reasoners (cf. Lokhorst 2011, as cited in Wernaart 2021) or moral agents (cf. Wernaart 2021).

The approaches discussed so far either begin from the palette or attempt to avoid normative ethics altogether. The antidote to this methodology is to begin instead from the practical actions (potential or otherwise) of machines and their programmers. Theoretical distinctions or posits should be proposed only in service of answering specific questions that arise in the course of the investigation.¹⁰ For example, asking whether it is ethical to teach an AI to reason ethically immediately raises the further question of whether such matters can be taught at all, so that what it is to reason ethically, and what is Good, are also put in question.¹¹ This prevents such investigations from becoming embroiled in issues regarding how to choose between theoretical approaches, or how to ‘combine’, sublate, or avoid them. Such purely theoretical tangles are not what should motivate our enquiries as we develop the ethics of AI further. Instead, we should recognise that what is being opened up is an entirely new field of practical action, one that will eventually surpass that of humans in many respects. Hence, it will be necessary to address the ethics of actions beyond those hitherto contemplated, and perhaps even radical questions not previously encountered in the study of human action.