Abstract
This paper has three main goals: (1) to clarify the role of Artificial Intelligence (AI)—along with algorithms more broadly—in online radicalization that results in ‘real world violence’; (2) to argue that technological solutions (like better AI) are inadequate responses to this problem, for both technical and social reasons; and (3) to demonstrate that platform companies’ (e.g., Meta, Google) stated preference for technological solutions functions as a type of propaganda that serves to erase the work of the thousands of human content moderators and to conceal the harms they experience. I argue that the proper assessment of these important, related issues must be free of the obfuscation that the ‘better AI’ proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.