Abstract
Critics are calling for the decolonization of artificial intelligence (AI). Their concern is that this technology, through dehumanizing applications, is marginalizing other modes of knowledge. The remedy usually proposed is the development of human-centric AI. This strategy, however, contains a serious blind spot that is addressed in this paper: the corrective usually proposed, participatory design, lacks the philosophical rigor to undercut the autonomy of AI and, thus, the colonization spawned by this technology. A more radical and substantive proposal, known as community-based design, is advanced in this discussion. This alternative makes a theoretical maneuver that allows AI design to be directed by human agency, thereby introducing a safeguard that may help to prevent colonization by this technology.
Notes
Traditionally, colonization refers to the domination and exploitation of one country by another, including the inferiorization of every facet of life of those who are dominated (Memmi, 1968). In this discussion, the focus is on the domination of local knowledge by computer technology, particularly by the algorithms associated with AI.
In this discussion, local refers to how persons, in workplaces or communities, interpret their situations and relationships, and act on the stock of knowledge they accumulate.
Note should be taken of recent challenges to the autonomy of AI. Critics such as Safiya Umoja Noble (2018) have pointed out the biases that are regularly built into algorithms. Racial bias in algorithms, in particular, has come under scrutiny (Cave & Dihal, 2020). The problem, however, is that many of the proposed correctives rely on improved technology to reduce discrimination, instead of incorporating local knowledge into the creation of algorithms (Mitchell, 2019).
Although Descartes (1596–1650) is often associated with mind–body dualism, his influence extends beyond this division (Bordo, 1987). Certainly an offshoot of Descartes’ distinction, dualism has appeared in a variety of disciplines, such as physics (Bernstein, 1983) and medicine (Aho, 2008). In these cases, the so-called objective element is treated as autonomous and, thus, a source of valid knowledge, while the subjective is marginalized. Indeed, overcoming the subjective is considered essential to discovering valid knowledge.
AI, in this usage, refers to augmented intelligence, that is, human intelligence that is supported and “augmented” by artificial intelligence. Through this terminology, the point is to advance an image of technology that is used by humanity, and not the other way around (Russell, 2019).
Chris Argyris and Donald Schön created a social methodology in the 1980s to contrast persons’ deep beliefs with their actual discourse. They called this method “The Left Column.” As part of the project on “FlourishingAI,” some innovations were made to this methodology. Now called “Situational Analysis,” this strategy has four additional columns: one for individual reflection, one for situational emergence, and two more for playbacking.
Playbacking is another term for “member check” (Colaizzi, 1978). Both of these terms refer to an iterative process between persons designed to achieve the dialogue proposed by Gadamer.
Whether or not this community-based strategy can be used outside of organizations is an interesting question. Readers who are concerned about the application of this strategy to communities should consult Murphy et al. (2022), “Introduction: Participatory budgeting as community-based work,” American Behavioral Scientist, https://doi.org/10.1177/00027642221086952.
References
Adler, P. S., & Winograd, T. (1996). The usability challenge. In P. S. Adler & T. Winograd (Eds.), Usability: Turning Technology into Tools (pp. 3–14). Oxford University Press.
Aho, J. (2008). Body Matters: A Phenomenology of Sickness, Disease, and Illness. Lexington Books.
Argyris, C. (2010). Organizational Traps: Leadership, Culture, Organizational Design. Oxford University Press.
Appadurai, A. (2001). Globalization. Duke University Press.
Appadurai, A. (2013). Number in the colonial imagination. In C. A. Breckenridge, & P. van der Veer (Eds.), Orientalism and the Postcolonial Predicament: Perspectives on South Asia (pp. 314-339). University of Pennsylvania Press.
Auernhammer, J. (2020). Human-centered AI: The role of human-centered design research in the development of AI. In S. Boess, M. Cheung, & R. Cain (Eds.), Synergy - DRS International Conference 2020, 11–14 August, held online. https://doi.org/10.21606/drs.2020.282
Berger, P. L., & Luckmann, T. (1966). The Social Construction of Reality. Doubleday.
Bernstein, R. J. (1983). Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis. University of Pennsylvania Press.
Birhane, A. (2019). The algorithmic colonization of Africa. Real Life, July 19. reallifemag.com/the-algorithmic-colonization-of-africa.
Bødker, S. (1996). Creating conditions for participation: Conflicts and resources in systems development. Human-Computer Interaction, 11(3), 215–236. https://doi.org/10.1207/s15327051hci1103_2
Bordo, S. (1987). The Flight to Objectivity: Essays on Cartesianism and Culture. SUNY Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Braverman, H. (1975). Labor and Monopoly Capital. Monthly Review Press.
Cave, S., & Dihal, K. (2020). The whiteness of AI. Philosophy and Technology, 33, 685–703. https://doi.org/10.1007/s13347-020-00415-6
Christian, B. (2020). The Alignment Problem. W.W. Norton and Company.
Colaizzi, P. (1978). Psychological research as the phenomenologist views it. In R. S. Valle & M. King (Eds.), Existential-Phenomenological Alternatives for Psychology (pp. 48–71). Oxford University Press.
Cruz, C. C. (2021). Decolonizing philosophy of technology: Learning from bottom-up and top-down approaches to decolonial technical design. Philosophy and Technology, 34, 1847–1881.
Deininger, M., Daly, S. R., Lee, J. C., Seifert C. M., & Sienko, K. H. (2019). Prototyping for context: Exploring stakeholder feedback based on prototype type, stakeholder group and question type. Research in Engineering Design, 4, 453–471. https://doi.org/10.1007/s00163-019-00317-5
Durant, J. (1999). Participatory technology assessment and the democratic model of the public understanding of science. Science and Public Policy, 26(5), 313–319. https://doi.org/10.3152/147154399781782329
Dussel, E. (2012). 1492. Editorial Docencia.
Ehn, P. (2017). Scandinavian design: On participation and skill. In D. Schuler & A. Namioka (Eds.), Participatory Design: Principles and Practices (pp. 41–77). CRC Press.
Ezekilov, J. (2011). Correcting 60 years of development failure: The potential of scaling up in addressing development ineffectiveness. Independent Study Project (ISP) Collection, 1083. https://digitalcollections.sit.edu/isp_collection/1083. Accessed Jan 2022.
Gadamer, H.-G. (1996). The Enigma of Health. Stanford University Press.
Gillespie, R. (1991). Manufacturing Knowledge: A History of the Hawthorne Experiments. Cambridge University Press.
Guendelsberger, E. (2019). On the clock: What low-wage work did to me and how it drives America crazy. Time, July 18. https://time.com/5629233/amazon-warehouse-employee-treatment-robots/. Accessed Jan 2022.
Heidegger, M. (1987). The Question Concerning Technology and Other Essays. Harper and Row.
Heigl, F., Kieslinger, B., Paul, K. T., Uhlik, J., & Dörler, D. (2019). Toward an international definition of citizen science. PNAS, 116(17), 8089–8092. https://www.pnas.org/doi/pdf/10.1073/pnas.1903393116. Accessed Jan 2022.
Hennen, L. (2012). Why do we still need participatory technology assessment? Poiesis & Praxis, 9, 27–41. https://doi.org/10.1007/s10202-012-0122-5
Hughes, J., & Sharrock, W. (1997). The Philosophy of Social Research (3rd ed.). Longman.
IBM Global AI Adoption Index. (2022). IBM Corporation. https://www.ibm.com/downloads/cas/GVAGA3JP. Accessed Jan 2022.
Jonas, H. (1982). Technology as a subject for ethics. Social Research, 49(4), 891–898.
Lepri, B., Oliver, N., & Pentland, A. (2021). Ethical machines: The human-centric use of artificial intelligence. iScience, 24(3), 102249. https://doi.org/10.1016/j.isci.2021.102249
Livingstone, J. (2018). The profound alienation of the Amazon worker. The New Republic, November 27. newrepublic.com/article. Accessed Jan 2022.
Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books.
Memmi, A. (1967). The Colonizer and the Colonized. Beacon Press.
Memmi, A. (1968). Dominated Man. Orion Press.
Merrett, F. (2006). Reflections on the Hawthorne effect. Educational Psychology, 26(1), 143–146.
Mhlambi, S. (2022). HAI seminar: Decolonizing AI. Stanford University, January 26, 2022. https://Stanford.Zoom.us/webinar/register/wnt7BevycyQ_aD9_gdrEF9Hg
Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus, and Giroux.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy and Technology, 33, 659–684. https://doi.org/10.1007/s13347-020-00405-8
Mohamed, S. (2018). Decolonial artificial intelligence. The Spectator, October. blog.shakirm.com/2018/10/decolonizing-artificial-intelligence. Accessed Jan 2022.
Mormina, M., & Pinder, S. (2018). A conceptual framework for training of trainers (ToT) interventions in global health. Globalization and Health, 14, 100. https://doi.org/10.1186/s12992-018-0420-3
Murphy, J. W. (2014). Community-Based Interventions: Philosophy and Action. Springer.
Murphy, J. W., Evans, S. D., Minutti-Meza, M. A. (2022). Introduction: Participatory budgeting as community-based work. American Behavioral Scientist. https://doi.org/10.1177/00027642221086952
Murphy, J. W., & Largacha-Martínez, C. (2021). Is it possible to create a responsible AI technology to be used and understood within workplaces and unblocked CEOs’ mindsets? AI & Society. https://doi.org/10.1007/s00146-021-01316-8
Nancy, J.-L. (2016). The Disavowed Community. Fordham University Press.
Noble, S. U. (2018). Algorithms of Oppression. New York University Press.
O’Neil, C. (2016). Weapons of Math Destruction. Crown.
Russell, S. J. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Penguin.
Schutz, A., & Luckmann, T. (1973). The Structures of the Lifeworld. Northwestern University Press.
Slota, S. C. (2020). Designing across distributed agency: Values, participatory design, and building socially responsible AI. https://repositories.lib.utexas.edu/bitstream/handle/2152/82848/c42d8459-27b5-4d3b-b9d9-a227536b3999%20%281%29.pdf?sequence=2&isAllowed=y. Accessed Jan 2022.
Smith, B. H. (1988). Contingencies of Value. Harvard University Press.
Walsh, T. (2018). Expert and non-expert opinion about technological unemployment. International Journal of Automation and Computing, 15(5), 637–642.
Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. Interactions, July–August. https://interactions.acm.org/archive/view/july-august-2019/toward-human-centered-ai. Accessed Jan 2022.
Weick, K. E. (2009). The Impermanent Organization. Wiley.
Winner, L. (1977). Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought. MIT Press.
Winograd, T. (2006). Shifting viewpoints: Artificial intelligence and human–computer interaction. Artificial Intelligence, 170(18), 1256–1258. https://doi.org/10.1016/j.artint.2006.10.011
Zhang, B., & Dafoe, A. (2019). Artificial Intelligence: American Attitudes and Trends. University of Oxford: Center for the Governance of AI / Future of Humanity Institute. https://ssrn.com/abstract=3312874. Accessed Jan 2022.
Funding
This article was supported by a fellowship given by the Fulbright Commission (USA) and the Ministry of Science (Colombia) to Carlos Largacha-Martínez, as part of the “Visiting Scholar” Program, 2020–2021 cohort. Professor John W. Murphy acted as the representative of the host institution, the University of Miami.
Contributions
Authors John W. Murphy and Carlos Largacha-Martínez declare that they carried out all of the research for this article.
Ethics declarations
The ideas expressed here are the authors’ own and do not implicate any of the people who helped us, the networks that supported us, the Fundación Universitaria del Área Andina, the University of Miami, or the Fulbright Commission (US-Colombia).
Ethics Approval, Consent to Participate, and Consent to Publish
Authors John W. Murphy and Carlos Largacha-Martínez declare that this research involved no interviews with human beings and no work with animals; therefore, no ethics approval, consent to participate, or consent to publish was required.
Conflict of Interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Murphy, J.W., Largacha-Martínez, C. Decolonization of AI: a Crucial Blind Spot. Philos. Technol. 35, 102 (2022). https://doi.org/10.1007/s13347-022-00588-2