In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press (2019)

Authors
Teresa Scantamburlo
University of Venice
Nello Cristianini
University of Bristol
Abstract
As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions arise where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can be grasped only with an adequate understanding of their general technical underpinnings. The discussion here focuses primarily on enforcement decisions in the criminal justice system, but draws on similar situations arising from other algorithms used to control access to opportunities, in order to explain how machine learning works and, consequently, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of classifier performance, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and the fact that 'bias' in machine learning has a different meaning from its common usage. A real-world classifier, the Harm Assessment Risk Tool (HART), is examined through its technical features: the classification method, the training data and the test data, the features and the labels, and the validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy, (b) fairness and equality before the law, (c) transparency and accountability, and (d) informational privacy and freedom of expression. These demonstrate how the system's technical features have important normative dimensions that bear directly on the extent to which it can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
Keywords: artificial intelligence, bias, fairness
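The abstract's account of how classifiers learn from labelled training data and are then scored on held-out test data can be illustrated with a minimal sketch. The feature names, data, and decision-stump method below are hypothetical assumptions chosen for brevity; they are not HART's actual model or inputs.

```python
# Minimal sketch of the train/test workflow the abstract describes:
# a "classifier" learns a rule from labelled training examples, then
# its performance is measured on separate test examples.
# Here the classifier is a decision stump: one threshold on one feature.

def train_stump(X, y):
    """Pick the (feature, threshold) pair with the best training accuracy."""
    best = None
    n_features = len(X[0])
    for f in range(n_features):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] >= t else 0 for row in X]
            acc = sum(p == label for p, label in zip(preds, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best[1], best[2]

def predict(model, X):
    f, t = model
    return [1 if row[f] >= t else 0 for row in X]

def accuracy(preds, y):
    return sum(p == label for p, label in zip(preds, y)) / len(y)

# Hypothetical training data: [prior_arrests, age] -> high-risk label (1/0).
X_train = [[0, 35], [1, 40], [5, 22], [7, 30], [2, 45], [6, 19]]
y_train = [0, 0, 1, 1, 0, 1]
# Held-out test data, never seen during training.
X_test = [[6, 25], [0, 50]]
y_test = [1, 0]

model = train_stump(X_train, y_train)   # learns: prior_arrests >= 5
print(accuracy(predict(model, X_test), y_test))  # → 1.0
```

Note that the learned rule is purely correlational, as the abstract stresses: the stump selects whichever threshold best separates the labels in the training sample, with no model of why the feature and the outcome co-occur.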

Similar books and articles

Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
Judicial Analytics and the Great Transformation of American Law.Daniel L. Chen - 2019 - Artificial Intelligence and Law 27 (1):15-42.
Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
