Dissertation, Sorbonne Université (2021)
Abstract
The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. Many of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring the implementation of what is commonly called artificial morality into these agents. To date, the types of reasoning structures and principles which inform artificial morality have been of two kinds: first, an ethically maximal vision of artificial morality which relies on the strict implementation of traditional moral theories such as Kantian deontology or Utilitarianism, and second, a more minimalist vision which applies stochastic AI techniques to large data sets of human moral preferences so as to elicit or intuit general principles and preferences for the design of artificial morality. Taken individually, each approach is unable to fully answer the challenge of producing inoffensive behavior in artificial moral agents, especially since both are unable to strike a balance between the ideal set of constraints which morality imposes, on the one hand, and the types of constraints public acceptability imposes, on the other. We provide an alternative approach to the design of artificial morality, the Ethical Valence Theory, whose purpose is to accommodate this balance, and apply this approach to the case of autonomous vehicles.