The Bias Dilemma
The Ethics of Algorithmic Bias in Natural-Language Processing
DOI: https://doi.org/10.5206/fpq/2022.3/4.14292

Keywords: artificial intelligence, algorithms, bias

Abstract
Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, doing so would often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we risk perpetuating or amplifying bias, even though we retain that useful information. Understanding this dilemma provides a useful way of rethinking the ethics of algorithmic bias in NLP.
License
Copyright (c) 2022 Oisín Deery, Katherine Bailey
This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors of work published in FPQ under the Creative Commons CC BY 4.0 License retain copyright to their work without restrictions and publication rights without restrictions. However, we request that authors include some sort of acknowledgement that the work was previously published in FPQ if part or all of a paper published in FPQ is used elsewhere.