Online Extremism, AI, and (Human) Content Moderation
DOI: https://doi.org/10.5206/fpq/2022.3/4.14295

Keywords: artificial intelligence, social media, content moderation, online extremism, radicalization, propaganda

Abstract
This paper has three main goals: (1) to clarify the role of artificial intelligence (AI)—along with algorithms more broadly—in online radicalization that results in “real world violence”; (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons; and (3) to demonstrate that platform companies’ (e.g., Meta, Google) stated preference for technological solutions functions as a type of propaganda that erases the work of the thousands of human content moderators and conceals the harms they experience. I argue that the proper assessment of these important, related issues must be free of the obfuscation that the “better AI” proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.
License
Copyright (c) 2022 Michael Randall Barnes
This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors of work published in FPQ under the Creative Commons CC BY 4.0 License retain copyright to their work without restrictions, along with unrestricted publication rights. However, if part or all of a paper published in FPQ is used elsewhere, we request that authors include an acknowledgement that the work was previously published in FPQ.