On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model

AI and Society:1-17 (forthcoming)

Abstract

An implicit ambiguity in the field of prediction-based decision-making concerns the relation between the concepts of prediction and decision. Much of the literature in the field tends to blur the boundary between the two concepts and often speaks simply of ‘fair prediction’. In this paper, we argue that differentiating these concepts is helpful when trying to implement algorithmic fairness. Even though fairness properties are related to the features of the prediction model used, what is properly called ‘fair’ or ‘unfair’ is a decision system, not a prediction model, because fairness concerns the consequences that decisions, not predictions, have on human lives. We clarify the distinction between the concepts of prediction and decision and show the different ways in which these two elements influence the final fairness properties of a prediction-based decision system. Discussing this relationship from both a conceptual and a practical point of view, we propose a framework that supports understanding and reasoning about the conceptual logic of creating fairness in prediction-based decision-making. The framework specifies two distinct roles, the ‘prediction-modeler’ and the ‘decision-maker’, and the information each must provide so that fairness of the overall system can be implemented. It also allows us to derive distinct responsibilities for the two roles and to discuss insights related to ethical and legal requirements. Our contribution is twofold. First, we offer a new perspective that shifts the focus from an abstract concept of algorithmic fairness to the concrete, context-dependent nature of algorithmic decision-making, in which different actors exist, can have different goals, and may act independently. Second, we provide a conceptual framework that can help structure prediction-based decision problems with respect to fairness issues, identify responsibilities, and implement fairness governance mechanisms in real-world scenarios.
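The abstract's central distinction can be made concrete with a minimal sketch (not taken from the paper; all scores, thresholds, and group labels here are hypothetical): a prediction model outputs risk scores, while the decision-maker's rule converts those scores into decisions, and a fairness criterion such as equal selection rates is evaluated on the decisions, not the predictions. The same model can therefore yield a fair or an unfair decision system depending on the rule applied.

```python
# Illustrative sketch: fairness attaches to the decision system, not the
# prediction model. All data and thresholds below are hypothetical.

def selection_rate(scores, threshold):
    """Fraction of individuals who receive a positive decision."""
    decisions = [s >= threshold for s in scores]
    return sum(decisions) / len(decisions)

# Hypothetical risk scores produced by ONE prediction model for two groups.
scores_a = [0.2, 0.4, 0.6, 0.8]
scores_b = [0.1, 0.3, 0.5, 0.7]

# Decision rule 1: a single threshold for both groups.
rate_a1 = selection_rate(scores_a, 0.5)   # 2 of 4 selected -> 0.5
rate_b1 = selection_rate(scores_b, 0.5)   # 2 of 4 selected -> 0.5

# Decision rule 2: the same predictions, but a stricter threshold for
# group B, produces unequal selection rates (a demographic-parity gap).
rate_a2 = selection_rate(scores_a, 0.5)   # 2 of 4 selected -> 0.5
rate_b2 = selection_rate(scores_b, 0.65)  # 1 of 4 selected -> 0.25

print(rate_a1, rate_b1)  # equal rates: this decision system satisfies parity
print(rate_a2, rate_b2)  # unequal rates: same model, different (unfair) decisions
```

The point of the sketch is that both decision systems share an identical prediction model; only the decision-maker's rule differs, and with it the fairness property of the system.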

Links

PhilArchive




Similar books and articles

Call for papers. [author unknown] - 2018 - AI and Society 33 (3):453-455.
AI is a ruler not a helper. Z. Liu - forthcoming - AI and Society:1-2.
AI and consciousness. Sam S. Rakover - forthcoming - AI and Society:1-2.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):457-458.
Big Data. Xin Wei Sha & Gabriele Carotti-Sha - 2023 - AI and Society 38 (6):2705-2708.
What do we do with knowledge? Chinmoy Goswami - 2007 - AI and Society 21 (1-2):47-56.
The age of machinoids. Gabriel Lanyi - forthcoming - AI and Society:1-2.
The inside out mirror. Sue Pearson - 2021 - AI and Society 36 (3):1069-1070.
Is LaMDA sentient? Max Griffiths - forthcoming - AI and Society:1-2.
Hermeneutic of performing cultures. Arun Kumar Tripathi - 2023 - AI and Society 38 (6):2125-2132.
Testing Turing. Irving Massey - 2023 - AI and Society 38 (5):1969-1970.
The dissolution of the condicio humana. Tim Rein - 2023 - AI and Society 38 (5):1967-1968.

Analytics

Added to PP
2024-03-17


Author Profiles

Joachim Baumann
University of Zürich
Teresa Scantamburlo
University of Venice

Citations of this work

No citations found.

