Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (2021)

Author
Will Fleisher
Northeastern University
Abstract
One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that individual fairness cannot serve as a definition of fairness. Moreover, IF methods should not be given priority over other fairness methods, nor used in isolation from them. To support these conclusions, I describe four in-principle problems for individual fairness as a definition and as a method for ensuring fairness: (1) counterexamples show that similar treatment (and therefore IF) is insufficient to guarantee fairness; (2) IF methods for learning similarity metrics are at risk of encoding human implicit bias; (3) IF requires prior moral judgments, limiting its usefulness as a guide for fairness and undermining its claim to define fairness; and (4) the incommensurability of relevant moral values makes similarity metrics impossible to construct for many tasks. In light of these limitations, I suggest that individual fairness cannot be a definition of fairness, and should instead be seen as one tool among several for ameliorating algorithmic bias.
Keywords: Algorithmic Fairness; Individual Fairness; Ethics of AI; Incommensurable Values
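The distance-metric account of IF that the abstract discusses is standardly formalized as a Lipschitz-style constraint (following Dwork et al.'s "Fairness Through Awareness," 2012): a classifier treats similar individuals similarly if the distance between its outputs is bounded by the distance between its inputs. A minimal Python sketch of checking that constraint follows; the function name, the toy threshold model, and the metrics are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def violates_individual_fairness(x, y, model, d_in, d_out, L=1.0):
    """Check the Lipschitz-style IF condition on a pair of individuals:
    similar treatment requires d_out(model(x), model(y)) <= L * d_in(x, y).
    Returns True if the pair witnesses a violation."""
    return d_out(model(x), model(y)) > L * d_in(x, y)

# Toy illustration (all choices here are hypothetical):
d_in = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
d_out = lambda p, q: abs(p - q)
model = lambda v: float(v[0] >= 0.5)  # hard threshold on a single feature

# Two near-identical individuals on opposite sides of the threshold
# receive maximally different outcomes, so the constraint is violated.
print(violates_individual_fairness([0.49], [0.51], model, d_in, d_out))
```

Note that the sketch presupposes exactly what the paper problematizes: a task-appropriate input metric `d_in` already encodes prior moral judgments about which individuals count as similar.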


Similar books and articles

Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2): 225-244.
A Moral Framework for Understanding of Fair ML Through Economic Models of Equality of Opportunity. Hoda Heidari - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 1.
On Fairness and Claims. Patrick Tomlin - 2012 - Utilitas 24 (2): 200-213.
Fairness in Hierarchical and Entrepreneurial Firms. Michael K. Green - 1992 - Journal of Business Ethics 11 (11): 877-882.
Hierarchical Consequentialism. Re'em Segev - 2010 - Utilitas 22 (3): 309-330.
Fairness, Political Obligation, and the Justificatory Gap. Jiafeng Zhu - 2014 - Journal of Moral Philosophy (4): 1-23.
