Online Extremism, AI, and (Human) Content Moderation

Feminist Philosophy Quarterly 8 (3/4) (2022)

Abstract

This paper has three main goals: (1) to clarify the role of Artificial Intelligence (AI)—along with algorithms more broadly—in online radicalization that results in ‘real world violence’; (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons; and (3) to demonstrate that platform companies’ (e.g., Meta, Google) stated preference for technological solutions functions as a type of propaganda that serves to erase the work of thousands of human content moderators and to conceal the harms they experience. I argue that a proper assessment of these important, related issues must be free of the obfuscation that the ‘better AI’ proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.

Similar books and articles

Content Moderation, AI, and the Question of Scale. Tarleton Gillespie - 2020 - Big Data and Society 7 (2):2053951720943234.
Automated Influence and the Challenge of Cognitive Security. Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
Towards an Epistemic Compass for Online Content Moderation. Abraham Tobi - 2024 - Philosophy and Technology 37 (3):1-20.

Analytics

Added to PP
2022-07-29

Downloads
293 (#84,757)

6 months
25 (#120,050)


Author's Profile

Michael Randall Barnes
University of Notre Dame

Citations of this work

Freedom of Speech. David van Mill - 2008 - Stanford Encyclopedia of Philosophy.
