
Disability, fairness, and algorithmic bias in AI recruitment

  • Original Paper
  • Published: 2022
  • Ethics and Information Technology

Abstract

While rapid advances in artificial intelligence (AI) hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and the fluid, contextual ways in which they manifest point to the limits of algorithmic fairness initiatives. In particular, existing de-biasing measures tend to flatten variance within and among disabled people and abstract away information in ways that reinforce pathologization. While fair machine learning methods can help mitigate certain disparities, I argue that fairness alone is insufficient to secure accessible, inclusive AI. I then outline a disability justice approach, which provides a framework for centering disabled people’s experiences and attending to the structures and norms that underpin algorithmic bias.


Data availability

Not applicable.

Materials availability

Not applicable.

Code availability

Not applicable.

Notes

  1. I use “disabled person,” or identity-first language, instead of “person with disabilities,” or person-first language, as the latter can separate people from their disabilities and treat the term as negative (Ladau, 2015). While people should not be recognized solely on the basis of their disabilities, disabilities are not merely tacked on to a person (Brown, 2015). Indeed, disability rights activists claimed disability as a source of community and pride in the 1990s (Linton, 2010), which continues in modern activist campaigns such as #SayTheWord (Andrews et al., 2019).

  2. For example, the Social Security Administration understands disability with respect to the capacity for gainful employment; the Americans with Disabilities Act speaks of it in terms of essential functions, major life activities, and reasonable accommodations; and Workers’ Compensation defines it as a percentage of ability lost (Samuels, 2014). That one may be recognized as disabled under some but not all of these definitions reflects not merely bureaucratic error but fundamental conflicts over how to classify the multiplicity of disabled people’s embodiments.

  3. Disability is a fuzzy, porous category and whether one identifies with it hinges on a host of factors, such as the model at hand (Brown & Broido, 2020). Indeed, one of the central disputes in disability studies concerns how to define disability (Kafer, 2013; Samuels, 2014). This coalitional account is thus a minimal one that captures features of other models, i.e., it theorizes disability as at least in part non-medical and ableism as a structural power relation.

  4. While data scientists can track the inputs and outputs of complex machine learning systems, they cannot do the same for their internal data processing operations, which occur in a black box (Castelvecchi, 2016). For example, an algorithm might detect predictors of job performance that vendors cannot easily recognize or explain.

  5. Some endorse demographic parity, which equalizes selection rates in each group regardless of outcomes (Lipton et al., 2018). For instance, an algorithm might rank job applicants within protected classes and select the top 5% of each; a hypothetical sketch of this approach appears after these notes. However, direct use of membership in these classes constitutes disparate treatment and is thus illegal (Romanov et al., 2019).

  6. Indeed, similar tensions emerge when implementing calibration’s equal predictive values and the neutrality of anti-classification in legal contexts (Green & Viljoen, 2020). This suggests that the incompatibility of these methods stems not from the translation of fairness into technical terms but from deeper conflicts between their normative assumptions; the sketch following these notes illustrates one such tension.

  7. Even if data is anonymized, it can reveal disability status. For example, researchers found they could predict blindness from Twitter activity (Morris et al., 2016) and Parkinson’s disease from mouse tracking (White et al., 2018). Others have re-identified people by combining anonymized data with public records (Emam et al., 2011).

  8. Indeed, the predictive validity of most hiring software has yet to be independently verified (Raghavan et al., 2020). Moreover, studies suggest that more subjective, qualitative measures of competence can be at least as accurate as ‘objective’ ones, as they better capture contextual aspects of workplace settings (Singh et al., 2016; Vij & Bedi, 2016).

  9. Given the multitude of different disabilities, some have suggested expanding the number of disability categories that AI systems consider (Trewin, 2018). However, there may be too few people who experience their disabilities in similar ways for AI to detect patterns, especially because many disabilities are comorbid and high-dimensional data is difficult to capture even with large datasets (Givens & Morris, 2020).
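
The two criteria mentioned in notes 5 and 6 can be made concrete with a short, hypothetical sketch. The Python below is not drawn from the paper or from any vendor's system; the group labels, scores, and outcomes are invented solely to illustrate how a within-group top-share rule equalizes selection rates (demographic parity) while the groups' positive predictive values (calibration's equal predictive values) can still diverge when base rates differ.

# Minimal sketch (not from the paper) of the two fairness criteria discussed
# in notes 5 and 6. All group labels, scores, and outcomes are hypothetical.

import random
from dataclasses import dataclass


@dataclass
class Applicant:
    group: str       # hypothetical protected-class label
    score: float     # hypothetical algorithmic ranking score
    qualified: bool  # hypothetical ground-truth job outcome


def select_top_share_within_groups(pool, share=0.05):
    """Demographic parity via within-group ranking: select the same share of
    each group regardless of outcomes (the top-5% example in note 5)."""
    selected = []
    for group in {a.group for a in pool}:
        members = sorted((a for a in pool if a.group == group),
                         key=lambda a: a.score, reverse=True)
        k = max(1, round(share * len(members)))
        selected.extend(members[:k])
    return selected


def selection_rate(pool, selected, group):
    """Share of a group's applicants who were selected."""
    members = [a for a in pool if a.group == group]
    return sum(1 for a in selected if a.group == group) / len(members)


def positive_predictive_value(selected, group):
    """Calibration-style check (note 6): among those selected from a group,
    the fraction who turn out to be qualified."""
    chosen = [a for a in selected if a.group == group]
    return sum(a.qualified for a in chosen) / len(chosen) if chosen else float("nan")


if __name__ == "__main__":
    rng = random.Random(0)
    pool = []
    # Hypothetical applicant pool in which base rates of qualification differ
    # across groups and scores track qualification only imperfectly.
    for group, size, base_rate in [("A", 200, 0.6), ("B", 200, 0.2)]:
        for _ in range(size):
            qualified = rng.random() < base_rate
            score = (0.7 if qualified else 0.4) + rng.gauss(0, 0.3)
            pool.append(Applicant(group, score, qualified))

    chosen = select_top_share_within_groups(pool, share=0.05)
    for group in ("A", "B"):
        print(group,
              "selection rate:", round(selection_rate(pool, chosen, group), 3),
              "PPV:", round(positive_predictive_value(chosen, group), 3))
    # Selection rates are equalized by construction, but with unequal base
    # rates the groups' predictive values will generally diverge, which is
    # the kind of tension between criteria flagged in note 6.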

References


Funding

No funding was received to assist with the preparation of this manuscript.

Author information

Authors and Affiliations

Authors

Contributions

Not applicable.

Corresponding author

Correspondence to Nicholas Tilmes.

Ethics declarations

Conflict of interest

The author has no conflicts of interest that are relevant to the content of this article.

Ethical approval

Not applicable.

Consent to publish

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Tilmes, N. Disability, fairness, and algorithmic bias in AI recruitment. Ethics Inf Technol 24, 21 (2022). https://doi.org/10.1007/s10676-022-09633-2

  • DOI: https://doi.org/10.1007/s10676-022-09633-2
