A comparative user study of human predictions in algorithm-supported recidivism risk assessment

Artificial Intelligence and Law:1-47 (forthcoming)

Abstract

In this paper, we study the effects of using an algorithm-based risk assessment instrument (RAI) to support the prediction of the risk of violent recidivism upon release. The instrument we used is a machine learning version of RisCanvi, used by the Justice Department of Catalonia, Spain. We hypothesized that people can improve their performance in assessing the risk of recidivism when assisted by a RAI, and that professionals in the domain can perform better than non-experts. Participants had to predict whether a person who has been released from prison will commit a new crime leading to re-incarceration within the next two years. The user study was conducted with (1) general participants from diverse backgrounds recruited through a crowdsourcing platform, and (2) targeted participants who are students and practitioners of data science, criminology, or social work, as well as professionals who work with RisCanvi. We also ran focus groups with participants of the targeted study, including people who use RisCanvi in a professional capacity, to interpret the quantitative results. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains are seen only in the performance of targeted participants with respect to that of crowdsourced participants. Professional participants also indicated that they would not foresee using a fully automated system for criminal risk assessment, but do consider it valuable for training, standardization, and for fine-tuning or double-checking their predictions on particularly difficult cases. Overall, we found that revising predictions with the support of a RAI improves the performance of all groups, that professionals perform better in general, and that a RAI can be considered a means of extending professional capacities and skills over the course of a career.


Similar books and articles

Index of Key Words. [author unknown] - 1997 - Artificial Intelligence and Law 5 (4):347-347.
Instructions for Authors. [author unknown] - 2004 - Artificial Intelligence and Law 12 (4):447-452.
Instructions for Authors. [author unknown] - 2002 - Artificial Intelligence and Law 10 (4):303-308.
Instructions for Authors. [author unknown] - 2002 - Artificial Intelligence and Law 10 (1):219-224.
Instructions for Authors. [author unknown] - 2001 - Artificial Intelligence and Law 9 (4):315-320.
Correction to: Reasoning with inconsistent precedents. Ilaria Canavotto - forthcoming - Artificial Intelligence and Law:1-4.
Editors' introduction. Henry Prakken & Giovanni Sartor - 1996 - Artificial Intelligence and Law 4 (3-4):157-161.
Assessment criteria or standards of proof? An effort in clarification. Giovanni Tuzet - 2020 - Artificial Intelligence and Law 28 (1):91-109.


