The Moral Status of AI Entities

Chapter in Ethics of Artificial Intelligence

Abstract

The emergence of AI poses serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally. The question arises, in this regard, as to whether AI systems possess, or can come to possess, the properties necessary to be morally considerable. In this chapter, we undertake a systematic analysis of the various debates taking place about the moral status of AI. First, we discuss the possibility that AI systems, by virtue of their new agential capabilities, can be understood as moral agents. The discussions between defenders of mentalist and anti-mentalist positions reveal many nuances and particularly relevant theoretical aspects. Second, given that an AI system can hardly qualify as an entity capable of bearing responsibility, we delve into the responsibility gap and the different ways of understanding and addressing it. Third, we provide an overview of the current and potential patientist capabilities of AI systems, which leads us to analyze whether AI can possess moral patiency. We also address the question of the moral and legal rights of AI. Finally, we introduce the two most relevant authors of the relational turn on the moral status of AI, Mark Coeckelbergh and David Gunkel, whom the problems associated with the ontological understanding of moral status have led to defend a relational approach to moral life.


Notes

1. In a similar vein, environmentalist proposals argue for the balance of the biotic community as a fundamental property of non-human moral status (Callicott 1980; Leopold 2020).

2. Animals and ecosystems had previously been morally considered, but only instrumentally. Examples are the doctrine of indirect duties (Kant 2017), which holds that we have duties regarding animals because of the effects that cruel treatment of them could have on our treatment of other human beings, and the conservationist doctrine that defended the preservation of natural spaces on account of the aesthetic pleasure they produce in human beings (Callicott 1990).

3. Some have argued that AI may one day become a superagent with a greater capacity for moral agency than humans (Bostrom 2017).

4. By standard conceptions of moral status, we refer to the dominant accounts developed, before the emergence of AI, to answer the question of which entities should be included in the circle of moral consideration. Christian ethics, Kantianism, and utilitarianism give different answers to the inquiry about the criteria for determining moral status. However, they all agree that, because artifacts lack properties relevant to moral status, such as sentience or rationality, they cannot be morally considered in themselves.

5. Utilitarianism defends an asymmetric conception of moral status, but only on the side of patiency. That is, it is not necessary to be both a moral agent and a moral patient to have moral status; it is sufficient to be solely the latter. This means that artifacts cannot have moral status under this view either.

6. When we refer to a conscious mind, we understand it as a biological mind. Even though some proposals in the philosophy of mind envision a full mind without biological anchorage (Fodor 2000), the lack of agreement leads us to endorse this minimal definition.

7. This is not to say that moral personhood coincides with the human species. Human beings have full moral status if they are moral persons, that is, if they have the properties required for full moral status, such as consciousness, rationality, etc. This should be kept in mind, since being human is often confused with being a moral person. Thanks to an anonymous reviewer for pointing this out.

8. The importance of the level of abstraction can be argued for using a thought experiment developed by Nicholas Agar (2020). If you found out after years of marriage that your wife is a robot who has no mind, even though she always behaved as if she had one, would you stop considering her morally? The intuitive negative answer to this question places serious limits on markedly internalist approaches.

9. Although few authors argue that AI already possesses moral status (Nadeau 2006), many have argued that AI systems may in the future possess the internal properties that are associated with moral personhood (Ashrafian 2015; Schwitzgebel and Garza 2015).

10. This perspective can be challenged from two different angles. On the one hand, although AI currently lacks the properties necessary to be a responsible moral agent, it may come to possess them in the future through technological progress. On the other hand, as we will see later when discussing those who advocate a profound change in the concept of responsibility, AI could be held responsible within entirely new relations of responsibility.

11. We distinguish between instrument and machine because of the argument offered by Gunkel (2020). Gunkel argues that the main reason why the responsibility gap occurs is that we try to respond to the advent and importance of AI from the instrumentalist paradigm. AI can be understood not only as an instrument, but also as a machine. The problem, however, is that a machine, although distinct from an instrument in the independence of its behavior, does not differ widely from it in terms of autonomy.

12. Strict liability refers to offenses that, although punishable as transgressions of a norm, do not imply blameworthiness. By this we refer to the legal conditions on which the assumptions of strict liability are based. Kiener (2022) refers in particular to three: (i) the offenses must be “not very serious”; (ii) the ones held responsible are usually those who benefit most from the damage-producing activity; (iii) these offenses do not carry any sufficiently serious stigma.

13. Hohfeld developed his framework as a theory of legal rights. However, because its contents are not specific to legal rights, but are shared by moral and political rights, some ethicists have applied it to these debates in a general manner (Andreotta 2021; Gunkel 2018).

14. The interests of animals and ecosystems are not based on the same assumptions. The defense of animal rights presupposes the capacity to subjectively experience a certain degree of well-being. In contrast, ecosystem interests are understood holistically, as the needs of a certain natural environment to maintain its ecological and/or biotic balance (Callicott 1980).

15. By general level we refer to a specific part of Miller’s argument. One criticism that can be made of Miller is that human beings can also be created for some purpose and that this does not make them lose their rights. However, Miller argues that being or not being produced according to a purpose should be understood not at the individual level but at the level of the existential origin of a species or typology of artifacts. The human species is the result of natural selection, a blind process devoid of teleology; AI, by contrast, is a product of human purpose.

16. We use the term quasi-ontological to express the possibility of partially relational approaches. There are patientist positions that emphasize relations; however, these relations are marked by certain ontological properties such as moral agency (in the case of virtue ethics, see Cappuccio et al. 2020).


Acknowledgements

This chapter was written as a part of the research projects Digital Ethics. Moral Enhancement through an Interactive Use of Artificial Intelligence (PID2019-104943RB-I00), funded by the State Research Agency of the Spanish Government, and Moral enhancement and artificial intelligence. Ethical aspects of a Socratic virtual assistant (B-HUM-64-UGR20), funded by FEDER/Junta de Andalucía, Consejería de Transformación Económica, Industria, Conocimiento y Universidades. The authors are also grateful for the insightful comments of Jan Deckers on a previous version of this chapter.

Author information

Corresponding author

Correspondence to Joan Llorca Albareda.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Llorca Albareda, J., García, P., Lara, F. (2023). The Moral Status of AI Entities. In: Lara, F., Deckers, J. (eds) Ethics of Artificial Intelligence. The International Library of Ethics, Law and Technology, vol 41. Springer, Cham. https://doi.org/10.1007/978-3-031-48135-2_4
