
Decolonization of AI: a Crucial Blind Spot

  • Research Article
  • Published in Philosophy & Technology

Abstract

Critics are calling for the decolonization of AI (artificial intelligence). The concern is that this technology marginalizes other modes of knowledge and lends itself to dehumanizing applications. What is needed to remedy this situation is the development of human-centric AI. However, this strategy has a serious blind spot that is addressed in this paper. The corrective that is usually proposed, participatory design, lacks the philosophical rigor to undercut the autonomy of AI, and thus the colonization spawned by this technology. A more radical proposal, known as community-based design, is advanced in this discussion. This alternative makes a theoretical maneuver that allows AI design to be directed by human agency, thereby introducing a safeguard that may help to prevent colonization by this technology.


Notes

  1. Traditionally, colonization refers to the domination and exploitation of one country by another, including the inferiorization of every facet of life of those who are dominated (Memmi, 1968). In this discussion, the focus is on the domination of local knowledge by computer technology, particularly by the algorithms associated with AI.

  2. In this discussion, local refers to how persons, in workplaces or communities, interpret their situations and relationships, and act on the stock of knowledge that is accumulated.

  3. Note should be taken of recent challenges to the autonomy of AI. Critics such as Safiya Umoja Noble (2018) have pointed out the biases that are regularly built into algorithms. The racial bias of algorithms is thus under scrutiny (Cave & Dihal, 2020). The problem, however, is that many of the proposed correctives rely on improved technology to reduce discrimination, instead of incorporating local knowledge into the creation of algorithms (Mitchell, 2019).

  4. Although Descartes (1596–1650) is often associated with mind–body dualism, his influence extends beyond this division (Bordo, 1987). While certainly an offshoot of Descartes’ distinction, dualism has appeared in a variety of disciplines, such as physics (Bernstein, 1983) and medicine (Aho, 2008). In these cases, the so-called objective element is treated as autonomous and, thus, a source of valid knowledge, while the subjective is marginalized. Indeed, overcoming the subjective is considered essential to discovering valid knowledge.

  5. AI refers to Augmented Intelligence, that is, human intelligence that is integrated with, and “augmented” by, artificial intelligence. Through this terminology, the point is to advance an image of technology that is used by humanity, and not the other way around (Russell, 2019).

  6. Chris Argyris and Donald Schön created a social methodology in the 1980s to contrast deep beliefs with the discourse of human beings. They called this method “The Left Column.” As part of the project on “FlourishingAI,” some innovations were made to this methodology. Now called “Situational Analysis,” this strategy has four additional columns, one for individual reflection, one for situational emergence, and two more for playbacking.

  7. Playbacking is another term for “member check” (Colaizzi 1978). Both of these terms refer to an iterative process between persons designed to achieve the dialogue proposed by Gadamer.

  8. Whether or not this community-based strategy can be used outside of organizations is an interesting question. Readers who are concerned about the application of this strategy to communities should consult the following: Murphy et al. (2022). “Introduction: Participatory Budgeting as Community-based Work”, American Behavioral Scientist, doi.org/10.1177/00027642221086952.


Funding

This article was supported by a fellowship awarded to Carlos Largacha-Martínez by the Fulbright Commission (USA) and the Ministry of Science (Colombia), as part of the “Visiting Scholar” Program, 2020–2021 cohort. Professor John W. Murphy acted as the representative of the host institution, the University of Miami.

Author information


Contributions

Authors John W. Murphy and Carlos Largacha-Martínez declare that they carried out all of the research needed for this article.

Corresponding author

Correspondence to Carlos Largacha-Martínez.

Ethics declarations

None of the ideas expressed here compromise any of the people who helped us, the networks that supported us, the Fundación Universitaria del Área Andina, the University of Miami, or the Fulbright Commission (US-Colombia).

Ethics Approval, Consent to Participate, and Consent to Publish

Authors John W. Murphy and Carlos Largacha-Martínez declare that this research did not involve interviews with human beings or any work with animals; therefore, no ethics approval, consent to participate, or consent to publish was required.

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Murphy, J.W., Largacha-Martínez, C. Decolonization of AI: a Crucial Blind Spot. Philos. Technol. 35, 102 (2022). https://doi.org/10.1007/s13347-022-00588-2

