Abstract
Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, and surveys and taxonomises the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, its users, and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability offers a possible, acceptable, and useful alternative: even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
Notes
General Data Protection Regulation, Recital 71, available at https://gdpr-info.eu/recitals/no-71/.
See Dennett (1971).
Developed by Northpointe (now renamed Equivant); the acronym stands for “Correctional Offender Management Profiling for Alternative Sanctions”.
Fricker (2007), p. 20.
Here, I have in mind examples such as the “GP at Hand” system developed by Babylon Health, which provides some NHS services in the UK. See https://www.gpathand.nhs.uk/.
Fricker (2007), p. 1.
Elliott (2017), p. 171.
The nascent sub-discipline of “explainable AI” (or xAI) is especially focussed on this kind of transparency.
I thank an anonymous reviewer for drawing this point—about the parallel type/token distinction at the level of problematic discrimination—to my attention.
See Angwin et al. (2016) for more detail and the actual COMPAS questionnaire used, here: https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html.
See this recent comic for a similar—if light-hearted—idea of type functional transparency with respect to machine learning based hiring algorithms, like that discussed in Sect. 2: https://xkcd.com/2237/.
Hempel (1965), p. 238.
The European Commission guidelines, for example, recommend that “… the option to decide against this interaction, in favour of human interaction should be provided.” (p. 18).
I am grateful to an anonymous reviewer for making this suggestion.
Indeed, this is a further strategy that Hirsch et al. (2017) recommend for the design and pilot phase of ML systems in the field of mental health diagnostics. They frame it as a mechanism for improving the accuracy of such systems, but this use of feedback could clearly also underlie the development of systems that are (perceived as) fair or just.
References
Almada M (2019) Human intervention in automated decision-making: toward the construction of contestable systems. In: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (ICAIL ’19), June 17–21, 2019, Montréal. ACM, New York. https://dl.acm.org/doi/10.1145/3322640.3326699
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Angwin J, Larson J (2016) Bias in criminal risk scores is mathematically inevitable, researchers say. ProPublica, 30 December 2016. https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say
Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) “It’s reducing a human being to a percentage”: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Paper no: 377, pp 1–4. https://doi.org/10.1145/3173574.3173951
Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, Newman P, Parry V, Pegman G, Rodden T, Sorrell T, Wallis M, Whitby B, Winfield A (2017) Principles of robotics: regulating robots in the real world. Connect Sci 29(2):124–129
Buranyi S (2018) How to persuade a robot that you should get the job. The Guardian, 4th March, 2018. https://www.theguardian.com/technology/2018/mar/04/robots-screen-candidates-for-jobs-artificial-intelligence
Clarke AC (1972/2013) Profiles of the future, 2nd edn. Hachette UK, London
Danaher J (2019) Automation and Utopia: human flourishing in a world without work. Harvard University Press, Cambridge
Dennett DC (1971) Intentional systems. J Philos 68(4):87–106
Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144(1):114–126
Dressel J, Farid H (2018) The accuracy, fairness, and limits of predicting recidivism. Sci Adv 4(1):eaao5580. https://doi.org/10.1126/sciadv.aao5580
Dutta S, Wei D, Yueksel H, Chen P-Y, Liu S, Varshney KR (2020) Is there a trade-off between fairness and accuracy? A perspective using mismatched hypothesis testing. In: Proceedings of the 37th International Conference on Machine Learning, Vienna (PMLR 119, 2020). https://proceedings.icml.cc/static/paper_files/icml/2020/2831-Paper.pdf
European Commission High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Brussels: European Commission. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
Elliott KC (2017) A tapestry of values: an introduction to values in science. Oxford University Press, Oxford
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28:689–707
Fricker M (2007) Epistemic injustice: power and the ethics of knowing. Oxford University Press, Oxford
Haugeland J (1985) Artificial intelligence: the very idea. MIT Press, Cambridge
Hempel C (1965) The function of general laws in history. In: Aspects of scientific explanation and other essays in the philosophy of science. Free Press, New York
Hirsch T, Merced K, Narayanan S, Imel ZE, Atkins DC (2017) Designing contestability: interaction design, machine learning, and mental health. In: DIS ’17: Proceedings of the 2017 Conference on Designing Interactive Systems, pp 95–99. https://doi.org/10.1145/3064663.3064703
Kaas M (2020) Raising ethical machines: bottom-up methods for implementing machine ethics. In: Thompson SJ (ed) Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global Press, Hershey (forthcoming)
Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent trade-offs in the fair determination of risk scores. https://arxiv.org/pdf/1609.05807.pdf
Lawton G (2019) Simulating the World. New Scientist, 5 October 2019, vol 3250, pp 38–41. https://www.newscientist.com/article/mg24332500-800-ai-can-predict-your-future-behaviour-with-powerful-new-simulations/
Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151:90–103
Marcus G, Davis E (2019) Rebooting AI: building artificial intelligence we can trust. Pantheon, New York
Mulligan DK, Kluttz D, Kohli N (2019) Shaping our tools: contestability as a means to promote responsible algorithmic decision making in the professions (July 7, 2019). https://doi.org/10.2139/ssrn.3311894
Robbins S (2019) A misdirected principle with a catch: explicability for AI. Minds Mach 29(4):495–514
Smith BC (2019) The promise of artificial intelligence: reckoning and judgment. MIT Press, Cambridge
Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. MIT Press, Cambridge
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2018) Transparency in algorithmic and human decision-making: Is there a double standard? Philos Technol 32:661–683
Cite this article
Walmsley, J. Artificial intelligence and the value of transparency. AI & Soc 36, 585–595 (2021). https://doi.org/10.1007/s00146-020-01066-z