Abstract
The emergence of AI poses serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally, which raises the question of whether AI systems possess, or could come to possess, the properties necessary to be morally considerable. In this chapter, we undertake a systematic analysis of the ongoing debates about the moral status of AI. First, we discuss the possibility that AI systems, by virtue of their new agential capabilities, can be understood as moral agents. Exchanges between defenders of mentalist and anti-mentalist positions have revealed many nuances and particularly relevant theoretical aspects. Second, given that an AI system can hardly qualify as an entity capable of bearing responsibility, we delve into the responsibility gap and the different ways of understanding and addressing it. Third, we provide an overview of the current and potential patientist capabilities of AI systems, which leads us to analyze whether AI could possess moral patiency. In addition, we address the question of the moral and legal rights of AI. Finally, we introduce the two most prominent authors of the relational turn on the moral status of AI, Mark Coeckelbergh and David Gunkel, whom the problems associated with the ontological understanding of moral status have led to defend a relational approach to moral life.