An Epistemic Lens on Algorithmic Fairness

ABSTRACT
In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that this lens makes two key contributions, helping to reframe and address some of the assumptions underlying inquiries into algorithmic fairness.
First, we argue that the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic fairness to address a wider range of harms not recognized by existing technical or legal definitions.
Second, we argue that the epistemic lens helps to identify the epistemic goals of inquiries into algorithmic fairness. We examine algorithmic harm within two distinct contexts: at times, we seek to understand and describe the world as it is, and, at other times, we seek to build a more just future. The epistemic lens can direct our attention to the epistemic frameworks that shape our interpretations of the world as it is and the ways we envision possible futures. Clarity about which epistemic context is relevant in a given inquiry can further inform choices among the different ways of measuring and addressing algorithmic harms. We introduce this framework with the goal of initiating new research directions bridging philosophical, legal, and technical approaches to understanding and mitigating algorithmic harms.