Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy
DOI: https://doi.org/10.5206/fpq/2022.3/4.14347
Keywords: algorithmic bias, explainable artificial intelligence, feminist epistemology, situated knowledge, epistemic injustice
Abstract
Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled more or less independently by technical experts who specialize in XAI methods. Drawing on resources from feminist epistemology, we show why technical XAI is mistaken. Specifically, we demonstrate that the proper detection of algorithmic bias requires relevant interpretive resources, which can only be made available, in practice, by actively involving a diverse group of stakeholders. Finally, we suggest how feminist theories can help shape integrated XAI: an inclusive social-epistemic process that facilitates the amelioration of algorithmic bias.
License
Copyright (c) 2022 Hsiang-Yun Chen, Linus Ta-Lun Huang, Ying-Tung Lin, Tsung-Ren Huang, Tzu-Wei Hung
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors of work published in FPQ under the Creative Commons CC BY 4.0 License retain copyright and publication rights to their work without restriction. However, we request that authors acknowledge the original publication in FPQ whenever part or all of a paper published in FPQ is reused elsewhere.