Algorithmic Decision-making, Statistical Evidence and the Rule of Law

Episteme:1-24 (forthcoming)

Abstract

The rapidly increasing role of automation throughout the economy, culture and our personal lives has generated a large literature on the risks of algorithmic decision-making, particularly in high-stakes legal settings. Algorithmic tools are charged with bias, shrouded in secrecy, and frequently difficult to interpret. However, these criticisms have tended to focus on particular implementations, specific predictive techniques, and the idiosyncrasies of the American legal-regulatory regime. They do not address the more fundamental unease about the prospect that we might one day replace judges with algorithms, no matter how fair, transparent, and intelligible they become. The aim of this paper is to propose an account of the source of that unease, and to evaluate its plausibility. I trace foundational unease with algorithmic decision-making in the law to the powerful intuition that there is a basic moral and legal difference between showing that something is true of many people just like you and showing that it is true of you. Human judgment attends to the exception; automation insists on blindly applying the rule. I show how this intuitive thought is connected both to epistemological arguments about the value of statistical evidence and to court-centered conceptions of the rule of law. Unease with algorithmic decision-making in the law thus draws on an intuitive principle that underpins a disparate range of views in legal philosophy. This suggests the principle is deeply ingrained. Nonetheless, I argue that the powerful intuition is not as decisive as it may seem, and indeed runs into significant epistemological and normative challenges. At an epistemological level, I show how concerns about statistical evidence's ability to track the truth can be resolved by adopting a probabilistic, rather than modal, conception of truth-tracking. At a normative level, commitment to highly individualized decision-making coexists with equally ingrained and competing principles, such as consistent application of law. This suggests that the "rule of law" may not identify a discrete set of institutional arrangements, as proponents of a court-centered conception would have it, but rather a more loosely defined set of values that could potentially be operationalized in multiple ways, including through some level of algorithmic adjudication. Although the prospect of replacing judges with algorithms is indeed unsettling, it does not necessarily entail unreasonable verdicts or an attack on the rule of law.

Links

PhilArchive




Similar books and articles

Algorithmisches Entscheiden, Ambiguitätstoleranz und die Frage nach dem Sinn.Lisa Herzog - 2021 - Deutsche Zeitschrift für Philosophie 69 (2):197-213.
Algorithmic Accountability and Public Reason.Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.

Analytics

Added to PP
2023-05-30


Citations of this work

No citations found.

