Abstract
With the advent of automated decision-making, governments have increasingly come to rely on artificially intelligent algorithms to inform policy decisions across a range of domains of government interest and influence. The practice has not gone unnoticed among philosophers, who worry about “algocracy” and its ethical and political impacts. One of the chief issues of ethical and political significance raised by algocratic governance, so the argument goes, is the opacity of algorithms. One of the best-known philosophical analyses of algocracy is John Danaher’s “The threat of algocracy”, which argues that government by algorithm undermines political legitimacy. In this paper, I treat Danaher’s argument as a springboard for raising further questions about the connections between algocracy, comprehensibility, and legitimacy, especially in light of empirical results about what voters and policymakers can be expected to know. The paper has the following structure: in Sect. 2, I introduce the basics of Danaher’s argument regarding algocracy. In Sect. 3, I argue that the algocratic threat to legitimacy has troubling implications for social justice. In Sect. 4, I argue that there nevertheless seem to be good reasons for governments to rely on algorithmic decision support systems. Lastly, I try to resolve the apparent tension between the findings of the two preceding sections.