DOI: 10.1145/3617694.3623248

An Epistemic Lens on Algorithmic Fairness

Published: 30 October 2023

ABSTRACT

In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose makes two key contributions, helping to reframe and address some of the assumptions underlying inquiries into algorithmic fairness.

First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic fairness in order to address a wider range of harms not recognized by existing technical or legal definitions.

Second, we argue that the epistemic lens helps to identify the epistemic goals of inquiries into algorithmic fairness. There are two distinct contexts within which we examine algorithmic harm: at times, we seek to understand and describe the world as it is, and, at other times, we seek to build a more just future. The epistemic lens can serve to direct our attention to the epistemic frameworks that shape our interpretations of the world as it is and the ways we envision possible futures. Clarity with respect to which epistemic context is relevant in a given inquiry can further help inform choices among the different ways of measuring and addressing algorithmic harms. We introduce this framework with the goal of initiating new research directions bridging philosophical, legal, and technical approaches to understanding and mitigating algorithmic harms.


Published in

EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2023, 498 pages
ISBN: 9798400703812
DOI: 10.1145/3617694

Copyright © 2023 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States

