The possibility of deliberate norm-adherence in AI

  • Original Paper
  • Published in: Ethics and Information Technology

Abstract

Moral agency status is often granted to individuals or entities that act intentionally within a society or environment. In the past, discussions of moral agency have focused primarily on human beings and some higher-order animals. However, with the fast-paced advancements in artificial intelligence (AI), we are quickly approaching the point where we need to ask an important question: should we grant moral agency status to AI? Answering this question requires criteria for determining the moral agency status of such entities. In this paper I argue that, at a minimum, deliberate norm-adherence must be possible before an entity can be granted moral agency status, and that, under the current status quo, AI systems are unable to meet this criterion. The paper makes two novel contributions to the field of machine ethics: first, it provides at least two criteria with which we can determine moral agency status, assessing the possibility of deliberate norm-adherence by examining the possibility of deliberate norm-violation; second, it shows that establishing moral agency in AI suffers the same pitfalls as establishing moral agency in constitutive accounts of agency.

Notes

  1. The level of abstraction is determined, according to Floridi and Sanders (2004), “by the way in which one chooses to describe, analyse and discuss a system and its context. LoA is formalised in the concept of ‘interface’, which consists of a set of features, the observables. Agenthood, and in particular moral agenthood, depends on a LoA”.

  2. For more on moral patiency and AI, see Gunkel (2012).

  3. I think it should go without saying that, as AI progresses, what is discussed here may no longer be relevant within the next few decades. But this is one of the unfortunate consequences of doing research in such a fast-paced industry.

  4. No doubt, far more criteria need to be added in order for us to truly determine the moral status of an entity. But here, I only want to introduce two.

  5. Many thanks to Christoph Hanisch who proposed that I use the terms norm-compliance and norm-endorsement.

  6. I appreciate that I am setting a tacit counterfactual condition here that should, in a longer account, be a) more carefully worked out, and b) related to standard versions of the Principle of Alternate Possibilities (starting with Frankfurt 1969). But I hope that the intuitive point I am trying to make here is clear without getting bogged down in the intricacies of either the vast literature on counterfactuals or on PAP.

  7. Acknowledgement to Veli Mitova, who used this term in our discussions.

  8. There are, of course, various ways of getting around this, such as voluntarist approaches or a hybrid theory which merges voluntarist and constitutivist features (Bratman 2007; Korsgaard 2008; Katsafanas 2013; Rosati 1995, 2003, 2016; Tiffany 2012). However, these approaches face their own set of problems, which I cannot go into here for brevity's sake. This paper is not particularly interested in proving the legitimacy of constitutivism, but only in the constitutive relationship between agency and norms, where norms are adhered to in virtue of their being part and parcel of the features constituting agency.

References

  • Bratman, M. (2007). Structures of agency. New York: Oxford University Press.

  • Castelfranchi, C., Dignum, F., Jonker, C., & Treur, J. (2000). Deliberative normative agents: Principles and architecture. In Intelligent agents (pp. 364–378). Berlin: Springer.

  • Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI and Society, 1, 10–25. https://doi.org/10.1007/s00146-009-0208-3.

  • Davidson, D. (1963). Actions, reasons, and causes. The Journal of Philosophy, 60(23), 685–700.

  • Enoch, D. (2006). Agency, Shmagency: Why normativity won't come from what is constitutive of action. Philosophical Review, 115(2), 31–60.

  • Ferrero, L. (2009). Constitutivism and the inescapability of agency. Oxford Studies in Metaethics, IV, 303–333.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379.

  • Frankfurt, H. (1969). Alternative possibilities and moral responsibility. Journal of Philosophy, 66(23), 829–839.

  • Gunkel, D. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.

  • Hansson, S. (1994). Decision theory: A brief introduction. Stockholm: Royal Institute of Technology.

  • Huffer, B. (2007). Actions and outcomes: Two aspects of agency. Synthese, 157, 241–265.

  • Johnson, A., & Hathcock, D. (n.d.). Study abroad and moral development. eJournal of Public Affairs, 3(3), 52–70.

  • Kant, I. (1785). Groundwork for the metaphysics of morals (A. Wood, Ed.). New Haven: Yale University Press.

  • Katsafanas, P. (2013). Agency and the foundation of ethics: Nietzschean constitutivism. Oxford: Oxford University Press.

  • Korsgaard, C. (2008). The constitution of agency: Essays on practical reason and moral psychology. Oxford: Oxford University Press.

  • Korsgaard, C. (2009). Self-constitution: Agency, identity, and integrity. Oxford: Oxford University Press.

  • McKenna, M., & Coates, J. (2018). Compatibilism. The Stanford Encyclopedia of Philosophy (Winter 2018 edition). Retrieved March 25, 2019, from https://plato.stanford.edu/archives/win2018/entries/compatibilism/.

  • Moor, J. (2011). The nature, importance, and difficulty of machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 13–20). New York: Cambridge University Press.

  • Müller, V. (2019). Ethics of AI and robotics. Retrieved August 15, 2019, from https://www.researchgate.net/project/Ethics-of-AI-and-Robotics-for-Stanford-Encyclopedia-of-Philosophy.

  • Railton, P. (2003). On the hypothetical and non-hypothetical in reasoning about belief and action. In Ethics and practical reason (pp. 53–80). Oxford: Clarendon Press.

  • Rosati, C. (1995). Naturalism, normativity, and the open question argument. Noûs, 29(1), 46–70.

  • Rosati, C. (2003). Agency and the open question argument. Ethics, 113(3), 490–527.

  • Rosati, C. (2016). Agents and "shmagents": An essay on agency and normativity. In R. Shafer-Landau (Ed.), Oxford studies in metaethics 11 (pp. 182–213). Oxford: Oxford University Press.

  • Tiffany, E. (2012). Why be an agent? Australasian Journal of Philosophy, 90(2), 223–233.

  • Velleman, D. (1996). The possibility of practical reason. Ethics, 106(4), 694–726.

  • Velleman, D. (2004). Replies to discussion on the possibility of practical reason. Philosophical Studies, 121, 225–238.

  • Warfield, T. (2000). Causal determinism and human freedom are incompatible: A new argument for incompatibilism. Noûs, 34, 167–180.

Acknowledgements

Many thanks to Veli Mitova for her encouragement, invaluable feedback, and assistance. Thanks to Thaddeus Metz for his advice on article writing and for feedback on a related project which greatly informed this one. Further thanks to Samuel Segun for his very helpful comments. Finally, thanks to all at the University of Johannesburg and SolBridge International School of Business who facilitated and assisted.

Author information

Corresponding author

Correspondence to Danielle Swanepoel.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Swanepoel, D. The possibility of deliberate norm-adherence in AI. Ethics Inf Technol 23, 157–163 (2021). https://doi.org/10.1007/s10676-020-09535-1
