Machine learning’s limitations in avoiding automation of bias

AI and Society 36 (1):197-203 (2021)

Abstract

The use of predictive systems has widened with the development of related computational methods and the evolution of the sciences in which these methods are applied (Solon and Selbst; Pedreschi et al.). These methods include machine learning techniques, face and voice recognition, temperature mapping, and others within the artificial intelligence domain. They are being applied to problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, to mention a few. However, these methods can nowadays produce inconsistent predictions and misclassifications, for example in conviction risk assessment (Office of Probation and Pretrial Services) or in decision-making processes for designing public policies (Lange). The goal of this paper is to identify current gaps in achieving fairness in predictive systems in artificial intelligence by analyzing the academic and scientific literature available up to 2020. To this end, we gathered materials from the last five years indexed in the Web of Science and Scopus and analyzed the proposed methods and their results in relation to bias as an emergent issue in the artificial intelligence field of study. Our tentative conclusions indicate that machine learning has intrinsic limitations that lead to the automation of bias when designing predictive algorithms. Consequently, other methods should be explored, or we should redefine how current machine learning approaches are used when building decision-making and decision-support systems for crucial institutions of our political systems, such as the judicial system, to mention one.

Links

PhilArchive



Similar books and articles

Learning robots and human responsibility.Dante Marino & Guglielmo Tamburrini - 2006 - International Review of Information Ethics 6:46-51.
Model theory and machine learning.Hunter Chase & James Freitag - 2019 - Bulletin of Symbolic Logic 25 (3):319-332.

Analytics

Added to PP
2020-06-03


Citations of this work

The wiseman in the mirror.Karl Kristian Larsson - 2021 - AI and Society 36 (3):1071-1072.
