Philosophy and Technology 35 (4):1-20 (2022)
Abstract
Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.
Similar books and articles
Computer Simulations, Machine Learning and the Laplacean Demon: Opacity in the Case of High Energy Physics. Florian J. Boge & Paul Grünke - forthcoming - In Andreas Kaminski, Michael Resch & Petra Gehring (eds.), The Science and Art of Simulation II.
Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - forthcoming - Philosophy of Science:1-13.
How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
The Value of Opacity: A Bakhtinian Analysis of Habermas's Discourse Ethics. T. Gregory Garvey - 2000 - Philosophy and Rhetoric 33 (4):370-390.
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
The Limits of Value Transparency in Machine Learning. Rune Nyrup - forthcoming - Philosophy of Science:1-23.
Explainable Machine Learning Practices: Opening Another Black Box for Reliable Medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
Algorithmic Decision-Making Based on Machine Learning From Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.