
Proportionality principle for the ethics of artificial intelligence

  • Commentary
  • Published in AI and Ethics

Abstract

This commentary explores the principle of proportionality as a possible solution to unresolved problems arising from tensions among principles in various ethical frameworks for artificial intelligence (AI). Conceptual and procedural divergences in the sets of principles reveal uncertainty as to which ethical principles should be prioritized and how conflicts between them should be resolved. Moreover, the currently dominant AI methods carry externalities, in particular for the environment. The principle of proportionality, together with a framework of tests of necessity, desirability, and suitability, can address some of these underlying issues and ensure that other societal priorities are properly taken into account. It is argued that, at least in certain scenarios, the perceived tensions can be false dichotomies. Proportionality presents a set of conditions that must be satisfied to justify the use of certain AI methods, which can be further extended to justifying the use of AI systems as such for a particular purpose.


Notes

  1. Those principles surface in different areas, in both the private and public sectors, while regional and broader international principles also emerge at the political level, e.g. the OECD AI Principles [21] and the EU Ethics Guidelines for Trustworthy AI [16].

  2. There is, however, criticism of this approach; see, e.g., [4].

  3. This commentary reviews the idea of applying the principle of proportionality within the framework of AI ethics; it is not an interpretation of the principles of proportionality and do no harm as provided in the adopted Recommendation on the Ethics of Artificial Intelligence [33]. The ‘do no harm’ principle is not addressed here because of its distinct, albeit related, character, while recognizing its power to guide the balancing of competing principles.

  4. The use of AI in war is also currently the subject of discussions at the UN level (see [31]) as well as of public debate.

References

  1. Alexy, R.: Proportionality and Rationality. In: Jackson, V., Tushnet M. (eds.) Proportionality: New Frontiers, New Challenges, pp. 13–29. Cambridge University Press, Cambridge (2017)

  2. Aquinas, T.: Summa Theologica. Treat. Law 1, 81–82 (1965)

  3. Aristotle: Nicomachean Ethics, Book V, p. 133. Routledge, New York (1910)

  4. Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: Can language models be too big? (2021) https://doi.org/10.1145/3442188.3445922

  5. Cicero, M.T.: Treatise on the commonwealth. In: Barham, F. (ed.) The Political Works of Marcus Tullius Cicero: Comprising His Treatise on the Commonwealth and His Treatise on the Laws, Translated from the Original, with Dissertations and Notes in Two Volumes, vol. 1 (1841–1842)

  6. CJEU: Case 11/70 Internationale Handelsgesellschaft mbH v. Einfuhr-und Vorratsstelle für Getreide und Futtermittel, ECLI:EU:C:1970:114 (1970)

  7. Des Places, S.B.: Revisiting proportionality in internal market law: looking at the unnamed actors in the CJEU’s reasoning. Nordic J. Int. Law 89, 286 (2020)

  8. Dhar, P.: The carbon impact of artificial intelligence. Nat. Mach. Intell. 2, 423–425 (2020)

  9. Engle, E.: History of the general principle of proportionality: an overview. Dartmouth Law J. 10 (2012)

  10. Giles, M.: The race to power AI’s silicon brains. MIT Technology Review (2017). https://www.technologyreview.com/2017/11/20/147557/the-race-to-power-ais-silicon-brains/

  11. Gill-Pedro, E., Linderfalk, U.: Proportionality in international law: Whose interests count? Nordic J. Int. Law 89, 275 (2020)

  12. Gosepath, S.: Equality. In: Zalta, E. (ed.), The Stanford Encyclopedia of Philosophy (2011). https://plato.stanford.edu/archives/spr2011/entries/equality/

  13. Hao, K.: AI can’t predict how a child’s life will turn out even with a ton of data. MIT Technology Review (2020). https://www.technologyreview.com/2020/04/02/998478/ai-machine-learning-social-outcome-prediction-study/

  14. Harbo, T.I.: The function of the proportionality principle in EU law. Eur. Law J. 16, 158–185 (2010)

  15. Harvey, D.: Federal proportionality review in EU law: Whose rights are they anyway? Nordic J. Int. Law 89, 303 (2020)

  16. HLEG: Ethics Guidelines for Trustworthy AI (2019). https://op.europa.eu/s/wG0P

  17. Jiang, R., Chiappa, S., Lattimore, T., György, A., Kohli, P.: Degenerate feedback loops in recommender systems. In: AIES 2019—Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 383–390 (2019). https://doi.org/10.1145/3306618.3314288

  18. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019)

  19. McFarland, A.: Two students develop software to combat CO2 caused by AI (2020). https://www.unite.ai/two-students-develop-software-to-combat-co2-caused-by-ai/

  20. Möller, K.: “Balancing as reasoning” and the problems of legally unaided adjudication: a rejoinder to Francisco Urbina. Int. J. Const. Law 12, 222 (2014)

  21. OECD: Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 (2019). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

  22. OpenAI: AI and efficiency (2020). https://openai.com/blog/ai-and-efficiency/

  23. Price, W.N., II: Regulating black-box medicine. Mich. Law Rev. 116, 421 (2017)

  24. Rolnick, D., et al.: Tackling climate change with machine learning (2019). https://arxiv.org/abs/1906.05433

  25. Sartor, G.: A quantitative approach to proportionality. In: Bongiovanni, G., et al. (eds.) Handbook of Legal Reasoning and Argumentation (2018). https://doi.org/10.1007/978-90-481-9452-0_21

  26. Steinhardt, R.G.: Book review: European administrative law. Geo. Wash. J. Int. Law Econ. 28, 225 (1994)

  27. Strubell, E., Ganesh, A., McCallum, A.: Energy and policy considerations for deep learning in NLP. In: ACL 2019—57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, p. 3645 (2019)

  28. Sucholutsky, I., Schonlau, M.: ‘Less than one’-shot learning: learning N classes from M < N samples (2020). http://arxiv.org/abs/2009.08449

  29. Sun, X., et al.: Ultra-low precision 4-bit training of deep neural networks (2020). https://papers.nips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf

  30. Sweet, A.S., Mathews, J.: Constitutions, rights, and judicial power. In: Proportionality Balancing and Constitutional Governance (2019). https://doi.org/10.1093/oso/9780198841395.003.0001

  31. UN: Background on LAWS in the CCW (2020). https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/

  32. UNESCO: UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence (2021). https://en.unesco.org/news/unesco-member-states-adopt-first-ever-global-agreement-ethics-artificial-intelligence

  33. UNESCO: Recommendation on the ethics of artificial intelligence (2021). https://unesdoc.unesco.org/ark:/48223/pf0000379920#page=14

  34. United Nations: Sustainable Development Goals (2015). https://www.un.org/sustainabledevelopment/sustainable-development-goals/

  35. Whittlestone, J., et al.: The role and limits of principles in AI ethics: towards a focus on tensions. In: AIES 2019—Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019)

  36. World Economic Forum: Global Technology Governance: A Multistakeholder Approach (2019). https://www.weforum.org/whitepapers/global-technology-governance-a-multistakeholder-approach


Acknowledgements

I am grateful to Professor Hamid Ekbia for his invaluable comments and suggestions and Louise Moutel for research assistance regarding the concept of proportionality in AI-related documents.

Funding

No funding was received to assist with the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Maksim Karliuk.

Ethics declarations

Competing interests

The author has no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The ideas and opinions expressed in this article are those of the author and do not necessarily represent the view of UNESCO.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Karliuk, M. Proportionality principle for the ethics of artificial intelligence. AI Ethics 3, 985–990 (2023). https://doi.org/10.1007/s43681-022-00220-1
