
A qualified defense of top-down approaches in machine ethics

Open Forum · AI & SOCIETY (2023)

Abstract

This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three major worries about such approaches, and I attempt to show that advocates of top-down approaches have some plausible avenues of response. I focus most of my attention on what I call the ‘technical manual objection’ to top-down approaches, inspired by the work of Annas (2004). In short, the idea is that top-down approaches treat ethical decision-making as being merely a matter of following some ethical instructions in the same way that one might follow some set of instructions contained in a technical manual (e.g., a computer manual), and this invites sensible skepticism about the ethical wisdom of machines that have been trained on those approaches. I respond by claiming that the objection is successful only if it is understood as targeting machines that have certain kinds of goals, and it should not compel us to totally abandon top-down approaches. Such approaches could still be reasonably employed to design ethical AI that operate in contexts where the answers to ethical questions are fairly noncontroversial. In fact, we should prefer top-down approaches when it comes to those types of context, or so I argue, due to the advantages I claim for them.


Data availability

Not applicable.

Notes

  1. Another helpful statement is provided by Cervantes et al. (2020: 511): “Agents based on top-down strategy contain ethical rules derived commonly from a specific ethical theory. This ethical theory is basically the criterion used by the AMA to make ethical decisions. Thus, these ethical agents or AMAs are capable of deriving their behavior for particular cases from a specific ethical theory.” Both of these ways of characterizing top-down approaches might be somewhat misleading, though, since they could be construed as suggesting that the moral principles themselves are to be used by or contained within the AI systems.

  2. Here we can observe already that a machine might carry out its ethical decision-making in a different way than humans typically do, for example, by routinely following some algorithmic decision procedure in every case. To what extent this is a problem for top-down approaches will be the topic of discussion later on.

  3. Consider, for example, Cloos’s (2005) Utilibot or Anderson and Anderson’s (2008) Jeremy.

  4. There are two notable exceptions here. Allen et al. (2000: 260) claim that top-down approaches are “safer” than some other sorts of approaches “in that they promise to provide an idealistic standard to govern the actions of AMAs,” and Wallach and Allen (2010: 83) maintain that “theories promise comprehensive solutions. If ethical principles or rules could be explicitly stated, acting ethically would just be a matter of following the rules. All that an AMA would need to do is compute whether its actions are allowed by the rules.” But even these authors do not provide any substantial arguments for accepting these claims.

  5. Tolmeijer et al. (2020) are an exception with regard to predictability, but they merely mention that a machine that is trained via a top-down approach would have predictable behavior. They do not discuss this claim any further. Hooker and Kim (2018) are an exception with regard to scrutability, as they assert that a machine trained on their deontological approach would be able to explain its behavior “by citing the maximal action plan that generates the particular rule that prompted the action,” but nothing further is stated on the topic.

  6. cf. van Wynsberghe and Robbins (2019: 726) on complexity.

  7. cf. Bostrom and Yudkowsky’s (2014, their emphases) description of Deep Blue: “In creating a superhuman chess player, the human programmers necessarily sacrificed their ability to predict Deep Blue’s local, specific game behavior. Instead, Deep Blue’s programmers had (justifiable) confidence that Deep Blue’s chess moves would satisfy a non-local criterion of optimality: namely, that the moves would tend to steer the future of the game board into outcomes in the ‘winning’ region as defined by the chess rules.”

  8. Another possibility here could be that we have failed to construct a set of algorithms that, when implemented, would guarantee the realization of the chosen ethical values or principles. As previously noted, though, I am currently assuming success at that task.

  9. The suggestion here, in effect, is that we ensure that the AI is scrutable to a certain extent, and that such AI would be scrutable is a virtue I will claim for top-down approaches in the next section.

  10. cf. Song and Yeung (2022) on domain specificity and the pluralist approach.

  11. See Sander-Staudt (2011).

  12. Notice, though, that the more pluralistic ethical conception need not necessarily be understood as denying the universality of ethics. The thought is just that different kinds of contexts might call for different kinds of ethical considerations.

  13. Of course there are questions to be asked concerning how we could distinguish faulty ethical decision-making from satisfactory decision-making, but I do not have the space to address these issues here.

  14. A concern might be raised here about my (perhaps optimistic) presumption that we would be capable of understanding how these AI reach their decisions even if we explicitly designed their decision procedures. Given our computational shortcomings, we cannot be too confident that we could follow all of their decision-making processes in every case. In response, while this might be true, we would at least be in a better position to follow their decision-making than we would be if their algorithms were continuously updated by the AI themselves. Song and Yeung (2022) claim that their pluralist hybrid system would limit opacity in the moral decision-making of AI, as it involves the implementation of some explicit algorithms, but ultimately they recognize that opacity would still exist due to the machine learning feature of their system. They assert that the risk of opacity can at least be supervised in that, “A supervised learning algorithm can be used to teach a machine to yield desired results and make the system more predictable,” but as I have already argued, we could not be as certain about the predictability of machine behavior if they were trained via this bottom-up style of approach, and predictability is extremely important when it comes to the ethical behavior of AI systems.

  15. Versions of this worry can be found in Allen et al. (2000), Gordon (2020), Wallach and Allen (2010), and Wallach et al. (2008).

  16. Of course the word ‘gratuitous’ is required here since (act-)utilitarianism implies that such annihilation would be morally right if it maximized aggregate well-being.

  17. It would still be quite an accomplishment if we could create AI that behave ethically even in just these restricted contexts, I think, and perhaps we could reserve the more controversial ethical decision-making contexts for human deliberation. In fact, this is the kind of position I am currently attracted to. Nevertheless, I also think that many machine ethicists ultimately do aspire to design ethical AI that is not so limited.

  18. As Gabriel (2020) observes, though, whether or not there are any objective ethical truths, it could still be reasonable to align artificial intelligence systems with our first-order ethical beliefs, as this would be conducive to social cooperation and would help prevent artificial intelligence from having malicious goals, which seems desirable.

  19. It appears that there could be two ways of understanding the objection then: (i) we do not know which ethical theory is true, so we should not employ top-down approaches, and (ii) we cannot come up with a set of ethical principles that everyone will find agreeable, so we should not employ these approaches. It seems to me that discovering some agreeable principles is what we should really care about here, whether or not we are in a position to know their truth value. Generally speaking, we cannot be certain that our ethical beliefs are true, but we are going to live by them anyway as long as we find them agreeable.

  20. Admittedly this does give rise to another issue for top-down approaches, or more precisely top-down approaches that invoke multiple principles that could potentially conflict, namely, how to resolve conflicts of ethical principles in different situations (see Anderson and Anderson (2007), Cave et al. (2018), Gordon (2020), van Wynsberghe and Robbins (2019), Wallach and Allen (2010), and Wallach et al. (2008)). It should be noted that this is an instance of a more general philosophical issue that pluralistic ethical theories face, and there is no obvious solution to it, but I am inclined to assert that top-down approaches have some options here. Ross’s deontological pluralism (1930), which is widely regarded as a commonsense ethical theory, famously faces this issue of conflicts of principles, and Ross admits that while we can have knowledge of our most basic moral duties, we can only ever have “probable opinion” with respect to what we ought to do whenever multiple duties conflict such that they recommend different and incompatible courses of action. When it comes to designing ethical AI, it seems that we can learn from Ross. Ross suggests that some of our basic moral duties are weightier than others. For example, our duty to not harm others is typically more stringent than our duty to benefit others, although there are cases where very little harm could be done in order to substantially benefit others, in which case such harm would be permissible. Now, I suggest that we should not hold machines to a higher epistemic standard than ourselves with respect to this sort of issue, and if we hope to design ethical machines that adhere to a plurality of ethical principles via a top-down approach, our best available option might be to assign various principles differing weights on the basis of our intuitive judgments. Typically, people have the assistance of intuition and past experience whenever they encounter situations in which multiple principles conflict, and although machines might not enjoy similar assistance in their own encounters, we can still rely on our own intuitions and experience when it comes to designing algorithms for these machines to follow, which might include some procedures for prioritizing different principles. Therefore, a machine’s prioritization of ethical principles need not be arbitrary. At any rate, I do not pretend to have totally addressed this issue of conflicts of ethical principles, but I do hope to have shown that the problem is not devastating for top-down approaches that utilize multiple principles.
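To make the weighting suggestion slightly more concrete, here is a minimal sketch (in Python, purely for illustration) of how designer-assigned weights over prima facie duties might be encoded and used to break conflicts. The duty names, weights, and scoring scheme are hypothetical stand-ins for the intuitive judgments described above, not a claim about which weights are correct.

```python
# A minimal, hypothetical sketch of designer-weighted prima facie duties.
# The duties, weights, and scoring are illustrative placeholders for the
# intuitive judgments discussed in the note above.

# Designer-assigned weights: non-maleficence is treated as more stringent
# than beneficence, echoing the Rossian suggestion in the text.
DUTY_WEIGHTS = {
    "non_maleficence": 3.0,   # duty not to harm others
    "beneficence": 1.0,       # duty to benefit others
    "fidelity": 2.0,          # duty to keep promises
}

def evaluate(action):
    """Score an action by how well it satisfies each weighted duty.

    `action` maps duty names to a satisfaction score in [-1, 1],
    where negative values mean the duty is violated.
    """
    return sum(DUTY_WEIGHTS[duty] * score for duty, score in action.items())

def choose(actions):
    """Pick the action with the highest weighted duty score."""
    return max(actions, key=lambda name: evaluate(actions[name]))

# Example: a small harm paired with a substantial benefit can win out,
# while a large harm for the same benefit cannot.
options = {
    "small_harm_big_benefit": {"non_maleficence": -0.1, "beneficence": 0.9},
    "no_harm_no_benefit":     {"non_maleficence": 0.0,  "beneficence": 0.0},
    "big_harm_big_benefit":   {"non_maleficence": -0.9, "beneficence": 0.9},
}
print(choose(options))  # -> "small_harm_big_benefit"
```

On this toy encoding, a small harm done for a substantial benefit is permitted while a large harm for the same benefit is not, which mirrors Ross’s observation about the relative stringency of non-maleficence and beneficence; the point is only that such a prioritization can be made explicit rather than arbitrary.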

  21. Moderate deontological views that posit deontological constraints (e.g. a constraint against lying), but also hold that such constraints could be permissibly violated in certain cases, face the sort of issue being described here. See Cook (2018) and Johnson (2020) for more on moderate deontology.

  22. See Gabriel (2020) for some similar suggestions.

  23. See Allen et al. (2000), Allen et al. (2006), Wallach and Allen (2010), and Wallach et al. (2008). As I mentioned in the first section of this paper, I have been assuming for the sake of argument that we are or could become capable of accomplishing this task, and now I will investigate that assumption.

  24. Some consequentialists deny that focusing on the consequences of one’s action options is an effective decision procedure for ethical behavior, at least in humans, and so one might wonder here whether we should think that the same applies for AI systems. Perhaps we could succeed at formulating an algorithmic procedure that would guarantee satisfaction of the act-utilitarian criterion of right action, if implemented, but that certainly would not entail that human persons ought to attempt to follow that procedure because it would probably be exceedingly complicated. In such a scenario, given the computational capabilities possessed by AI, it might very well be sensible for machines to follow the procedure, but humans might do better at satisfying the given criterion via other methods. I say more about the relationship between decision procedures and criteria of right action in note 27.

  25. For machines that are designed to satisfy a number of different ethical principles, then, the extent to which computational tractability is an issue will depend on what the combined computational requirements for implementation are, taking all of the principles into account.

  26. cf. Wallach et al. (2008): “Of course humans apply consequentialist and deontological reasoning to practical problems without calculating endlessly the utility or moral ramifications of an act in all possible situations. Our morality, just as our reasoning, is bounded by time, capacity, and inclination. Parameters might also be set to limit the extent to which a computational system analyzes the beneficial or imperatival consequences of a specific action.” They also go on to note the significance of possibly implementing heuristics in computational systems.
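As a gesture at what such parameters and heuristics could look like, here is a toy sketch (Python, with entirely hypothetical names such as `outcome_model` and `rules_of_thumb`): the number of outcomes examined is capped, and an applicable rule of thumb settles the case before any calculation is attempted.

```python
# A toy illustration of bounding consequence analysis, in the spirit of the
# Wallach et al. passage quoted above. The outcome model, the budget, and
# the rules of thumb are all hypothetical placeholders.

import itertools

MAX_OUTCOMES = 1000  # parameter limiting how far the analysis may go

def expected_utility(action, outcome_model, budget=MAX_OUTCOMES):
    """Estimate expected utility, examining at most `budget` outcomes.

    `outcome_model(action)` is assumed to yield (probability, utility)
    pairs; the enumeration is truncated rather than carried on endlessly.
    """
    return sum(p * u for p, u in itertools.islice(outcome_model(action), budget))

def decide(actions, outcome_model, rules_of_thumb):
    """Let an applicable rule of thumb settle the case if one exists;
    otherwise fall back on the bounded calculation above."""
    for rule in rules_of_thumb:
        verdict = rule(actions)
        if verdict is not None:  # the heuristic settles the case
            return verdict
    return max(actions, key=lambda a: expected_utility(a, outcome_model))
```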

  27. Surely one could invent a better decision procedure than this, or at least make this one more precise, but I am only trying to illustrate. It is by now a familiar point among moral philosophers that the best decision procedures for satisfying certain consequentialist ethical theories, especially act-utilitarianism, are probably ones that do not reference any consequentialist criterion of right action or require someone to constantly be deliberating about the consequences of their actions, but instead include things like following directives such as ethical rules of thumb or forming certain dispositions (cf. Railton’s (1984) sophisticated consequentialism). Also, see Muehlhauser and Helm (2012: 108) for some concerns about a “machine superoptimizer” being programmed to maximize desire satisfaction in humans.

  28. These suggestions invite a concern about whether approximate satisfaction of a given ethical theory would be enough to secure predictability of machine behavior, though. This will be a matter of degree, or so I contend. If following some decision procedure satisfied a theory only half of the time, say, then that would dramatically lessen the extent to which a machine’s behavior would be predictable if it followed that procedure. Likewise, if following it satisfied the theory almost all of the time, then the machine would be much more predictable. Whether (or to what extent) a certain top-down approach could claim the virtue of predictability, then, would depend on the adequacy of its associated decision procedure(s).
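One crude way to picture this ‘matter of degree’ point is to estimate how often a candidate decision procedure agrees with the theory’s own criterion of right action across a batch of test scenarios, and to treat that rate as a rough proxy for predictability. The sketch below (Python) assumes hypothetical `procedure`, `criterion`, and `scenarios` objects supplied by the designer.

```python
def agreement_rate(procedure, criterion, scenarios):
    """Fraction of test scenarios on which the decision procedure selects
    the very action that the theory's criterion of rightness selects."""
    matches = sum(1 for s in scenarios if procedure(s) == criterion(s))
    return matches / len(scenarios)

# A rate near 1.0 supports strong behavioral predictions from the theory;
# a rate near 0.5 would undercut them, as discussed above.
```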

  29. An interesting worry that could arise here stems from the fact that people could be incentivized to manipulate algorithms in their favor. If a team of programmers were in charge of creating an ethical decision procedure for a powerful AI, then they would have incentives to design it in such a way that the procedure does not in fact improve aggregate well-being, say, but rather benefits specific interested parties. To prevent this sort of activity, there would need to be considerable oversight and inspection of the outputs of the relevant algorithms.

  30. Purves et al. (2015) argue that the deployment of autonomous weapons systems (AWS) is morally problematic because those systems cannot engage in proper moral decision-making, and they cannot engage in such decision-making because they cannot replicate human moral judgment. Their argument is similar to mine since it focuses on considerations about whether moral decision-making is codifiable or rule-like, but their argument is supposed to target AI systems more generally too whereas mine is only directed at machines trained via top-down approaches. They write, “However artificial intelligence is created, it must be the product of a discrete list of instructions provided by humans,” but this is just plainly incorrect. Top-down approaches involve designing ethical AI in this way, but other design approaches, as mentioned earlier, might seek to develop ethical sensibilities in AI through various learning strategies, and I believe it is an open question whether or not such approaches could succeed at creating machines that engage in competent ethical decision-making. In addition, they are concerned with ethical issues that involve life-or-death targeting decisions being made by AWS, but non-codifiability concerns are especially plausible in that type of context, whereas other contexts that are not as ethically weighty or complicated should perhaps be treated differently, as I will maintain. More recently, Song and Yeung (2022) maintain that top-down approaches assume that morality is codifiable, but they do not consider how advocates of top-down approaches could respond, which is what I intend to do.

  31. Here one might raise the following possibility as a concern for my argument: suppose that we designed an ethical manual, rigorously tested it in a host of hypothetical contexts, and discovered that users of the manual would do measurably better in those contexts than actual doctors navigating such contexts do. If this is possible, and it certainly seems that it is, then ethical manuals could turn out to be more reliable than I am suggesting. While I do not deny this possibility, though, I am skeptical about how seriously we should regard it. Possibilities are cheap, and even if we carried out such an experiment and made the aforementioned discovery, I would be inclined to assert that what this really indicates is the ethical incompetence (or unreliability) of doctors, not the reliability of ethical manuals. (I do not actually think doctors are so ethically unreliable, generally speaking! I wager that they would outperform an ethical manual.)

  32. Annas also attributes a version of this objection to Hursthouse (1999).

  33. See Dancy (2017) for some different formulations of moral particularism. Also, see Song and Yeung (2022) on the distinction between in-principle and in-practice non-codifiability.

  34. To make this point especially vivid, consider three examples of what I take to be commonsense general moral principles: (i) deception is bad, (ii) undeserved suffering is bad, and (iii) deserved pleasure is good. Obviously, these principles could come into conflict in certain situations, and nothing about the principles themselves entails how one ought to proceed in these situations, such as if one were forced to choose between performing an act of deception to ensure that undeserved suffering would not take place and performing an honest act that would result in the occurrence of some undeserved suffering and some deserved pleasure. The principles themselves could not assist one in making the right decision, but perhaps years of experience (including all of the other plausibility-enhancing features of ethical decision-making already mentioned) could be of assistance, and indeed we are inclined to believe that they would be.

  35. As mentioned earlier, it would be quite a letdown in the eyes of many machine ethicists if we could deploy ethical machines only in the easy cases, but I want to suggest now that this might be the best way forward despite the letdown.

  36. Purves et al. (2015) express a worry of this kind. They claim that even if AI could become as reliable as humans when it comes to making moral decisions, “their decisions would be morally deficient in the following respect: they could not be made for the right reasons.” Their worry targets autonomous weapons in particular, though, which would obviously be placed into ethically weighty contexts.

  37. Two things are worth noting: (i) by ‘difficult ethical decision-making’, I mean the kind of decision-making that was illustrated earlier in the case involving the doctor and the question of who they should treat first. I take it that these types of situations are difficult precisely because there seem to be many ethically salient considerations in play and no algorithmic procedure by which to decide how one ought to proceed, and (ii) one might be left wondering how to determine what would count as an ethically salient consideration in the relevant scenarios. I intend to sidestep this question here not because it isn’t important or interesting, but because I am assuming that most of us have an intuitive grasp of such things (e.g., considerations relating to harm, fairness, autonomy, and the like are all intuitively ethically salient).

  38. It might be complained here that what one should really care about is that people agree with their own ethical conclusions, no matter their methods of getting there. For instance, one should not care about how someone reaches the conclusion that torturing babies for fun is wrong, but rather one should only care that the conclusion is reached at all rather than some incorrect conclusion (e.g. that torturing babies is morally required!). I respond by observing that the problem with this line of thought is that we have reason to care about people’s actual methods of moral reasoning not only because we care about people being reliable moral reasoners in novel situations but also because we care about people’s moral characters and desire that they be virtuous. To the extent that we desire that people be reliable moral reasoners who do not make random or unconsidered moral judgments, we should care about their methods, and furthermore virtuous people will care about being careful moral reasoners.

  39. See Kant (1996) on the distinction between acting from duty and acting merely in conformity with duty.

  40. See Véliz (2021) for an argument that sentience is necessary for moral agency, and so algorithms are not moral agents.

  41. See Gibert (2023: 136) who writes, “The term ‘virtuous robot’ is not literal. Machines lack the capacities needed to be considered genuinely virtuous: moral understanding, affective sensitivity, moral reflection, and moral imagination.” They cite Wallach and Vallor (2020) on this point. Constantinescu and Crisp (2022: 1549) also claim that AI systems do not qualify as virtuous entities, as they are “so far unable to display the right feelings, or any type of feelings, whatsoever.” On the importance of feelings and emotions in people’s perceptions of the reliability of artificial ethical decision-makers, see Bigman and Gray’s (2018) empirical study.

  42. A final concern that might be pressed here is that AI systems will eventually be used broadly in various types of situations, and because of this, we might need ethical machines in many such contexts that can accomplish more than just reliably generating accurate ethical conclusions in easy cases. In reply, although it is almost certainly true that AI will be nearly ubiquitous, it is ultimately up to us whether or not we want them to be engaged in ethical decision-making all over the place. If we are convinced that we need human ethical decision-makers in weighty or complicated contexts, then we simply must resist the temptation to introduce artificial ethical decision-makers there. As Sharkey (2020: 293) writes, “The challenge is to find the right path to steer between capitalising on and benefitting from the unique opportunities that robots can offer, and avoiding a future in which robots are placed in positions and roles that require a moral understanding that they do not have.”

References

  • Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160

  • Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261

  • Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf Technol 7(3):149–155

  • Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17

  • Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Magazine 28(4):15–26

  • Anderson M, Anderson SL (2008) Ethical healthcare agents. In: Sordo M, Vaidya S, Jain LC (eds) Advanced Computational Intelligence Paradigms in Healthcare-3. Springer, Berlin, pp 233–257

  • Annas J (2004) Being virtuous and doing the right thing. Proc Addresses Am Philosophical Assoc 78(2):61–75

  • Bigman YE, Gray K (2018) People are averse to machines making moral decisions. Cognition 181:21–34

  • Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey WM (eds) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, pp 316–334

  • Cave S, Nyrup R, Vold K, Weller A (2018) Motivations and risks of machine ethics. Proc IEEE 107(3):562–574

  • Cervantes J-A, López S, Rodríguez L-F, Cervantes S, Cervantes F, Ramos F (2020) Artificial moral agents: a survey of the current status. Sci Eng Ethics 26(2):501–532

  • Cloos C (2005) The utilibot project: an autonomous mobile robot based on utilitarianism. In: AAAI Fall Symposium on Machine Ethics

  • Constantinescu M, Crisp R (2022) Can robotic AI systems be virtuous and why does this matter? Int J Soc Robot 14(6):1547–1557

  • Cook T (2018) Deontologists can be moderate. J Value Inq 52(2):199–212

  • Dancy J (2017) Moral particularism. In: Zalta EN (ed) The Stanford Encyclopedia of Philosophy (Winter 2017 Edition). https://plato.stanford.edu/archives/win2017/entries/moral-particularism/

  • Gabriel I (2020) Artificial intelligence, values, and alignment. Mind Mach 30(3):411–437

  • Gibert M (2023) The case for virtuous robots. AI and Ethics 3(1):135–144

  • Gordon J-S (2020) Building moral robots: ethical pitfalls and challenges. Sci Eng Ethics 26(1):141–157

  • Hooker JN, Kim TWN (2018) Toward non-intuition-based machine and artificial intelligence ethics: a deontological approach based on modal logic. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society

  • Hursthouse R (1999) On Virtue Ethics. Oxford University Press

  • Johnson CM (2020) How deontologists can be moderate (and why they should be). J Value Inq 54(2):227–243

  • Kant I (1996) Groundwork of the metaphysics of morals. In: Gregor MJ (ed) Practical Philosophy. Cambridge University Press

  • Muehlhauser L, Helm L (2012) The singularity and machine ethics. In: Eden A, Moor J, Søraker J, Steinhart E (eds) Singularity Hypotheses. Springer, Berlin, Heidelberg, pp 101–126

  • Purves D, Jenkins R, Strawser BJ (2015) Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory Moral Pract 18:851–872

  • Railton P (1984) Alienation, consequentialism, and the demands of morality. Philos Public Aff 13(2):134–171

  • Ross WD (1930) The right and the good. Oxford University Press

  • Sander-Staudt M (2011) Care ethics. In: Fieser J (ed) The Internet Encyclopedia of Philosophy. http://www.iep.utm.edu/care-eth/

  • Sharkey A (2020) Can we program or train robots to be good? Ethics Inf Technol 22(4):283–295

  • Song F, Yeung SHF (2022) A pluralist hybrid model for moral AIs. AI & Soc 1–10

  • Tolmeijer S, Kneer M, Sarasua C, Christen M, Bernstein A (2020) Implementations in machine ethics: a survey. ACM Comput Surv (CSUR) 53(6):1–38

  • Véliz C (2021) Moral zombies: why algorithms are not moral agents. AI & Soc 36:487–497

  • Wallach W, Allen C (2010) Moral Machines: Teaching Robots Right from Wrong. Oxford University Press

  • Wallach W, Allen C, Smit I (2008) Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI & Soc 22(4):565–582

  • Wallach W, Vallor S (2020) Moral machines: from value alignment to embodied virtue. In: Liao M (ed) Ethics of Artificial Intelligence. Oxford University Press, pp 383–412

  • Weld DS, Bansal G (2019) The challenge of crafting intelligible intelligence. Commun ACM 62(6):70–79

  • van Wynsberghe A, Robbins S (2019) Critiquing the reasons for making artificial moral agents. Sci Eng Ethics 25(3):719–735


Acknowledgements

For helpful feedback on previous drafts of this paper, I thank Justin D’Arms, Eden Lin, Tristram McPherson, an anonymous reviewer for this journal, and participants in the philosophy dissertation seminar at Ohio State in the spring of 2021.

Author information

Corresponding author

Correspondence to Tyler Cook.

Ethics declarations

Conflict of interest

The author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Cook, T. A qualified defense of top-down approaches in machine ethics. AI & Soc (2023). https://doi.org/10.1007/s00146-023-01820-z
