
Artificial intelligence and the value of transparency


Abstract

Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, and surveys and taxonomises the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency (concerning the relationship between an AI system, its developers, users and the media) may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability is a possible, acceptable, and useful alternative: even if we cannot understand how a system arrived at a particular output, we at least have the means to challenge it.


Notes

  1. Sometimes also discussed under the heading of “explainability,” “explicability” or “understandability” (e.g., by Robbins 2019), or with reference to “accountability,” “intelligibility” and “interpretability” (e.g., in Floridi et al. 2018).

  2. General Data Protection Regulation, Recital 71, available at https://gdpr-info.eu/recitals/no-71/.

  3. See Dennett (1971).

  4. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

  5. https://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy or https://www.youtube.com/watch?v=oXAEL8IjUlo.

  6. https://www.sapiens.org/column/machinations/ai-as-magic/.

  7. https://www.forbes.com/sites/jasonbloomberg/2018/09/16/dont-trust-artificial-intelligence-time-to-open-the-ai-black-box.

  8. https://oecdobserver.org/news/fullstory.php/aid/5543/A_mystery_in_the_machine.html.

  9. See https://www.bbc.com/news/technology-45809919.

  10. Developed by Northpointe (now renamed Equivant); the acronym stands for “Correctional Offender Management Profiling for Alternative Sanctions”.

  11. Fricker (2007), p. 20.

  12. Here, I have in mind examples such as the “GP at Hand” system developed by Babylon Health, which provides some NHS services in the UK. See https://www.gpathand.nhs.uk/.

  13. Fricker (2007), p. 1.

  14. Elliott (2017), p. 171.

  15. See here: https://twitter.com/newscientist/status/1180916793126326273.

  16. The nascent sub-discipline of “explainable AI” (or xAI) is especially focussed on this kind of transparency.

  17. I thank an anonymous reviewer for drawing this point—about the parallel type/token distinction at the level of problematic discrimination—to my attention.

  18. See Angwin et al. (2016) for more detail and the actual COMPAS questionnaire used, here: https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html.

  19. See this recent comic for a similar—if light-hearted—idea of type functional transparency with respect to machine learning based hiring algorithms, like that discussed in Sect. 2: https://xkcd.com/2237/.

  20. Hempel (1965), p. 238.

  21. The European Commission guidelines, for example, recommend that “… the option to decide against this interaction, in favour of human interaction should be provided.” (p. 18).

  22. I am grateful to an anonymous reviewer for making this suggestion.

  23. Indeed, this is a further strategy that Hirsch et al. (2017) recommend for the design and pilot phase of ML systems in the field of mental health diagnostics. They frame it as a mechanism for improving the accuracy of such systems, but this use of feedback could clearly also underlie the development of systems that are (perceived as) fair or just.

  24. See: https://en.wikipedia.org/wiki/Computer_says_no.

References

  • Almada M (2019) Human intervention in automated decision-making: toward the construction of contestable systems. In: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law (ICAIL ’19), June 17–21, 2019, Montreal. ACM, New York. https://dl.acm.org/doi/10.1145/3322640.3326699

  • Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  • Angwin J, Larson J (2016) Bias in criminal risk scores is mathematically inevitable, researchers say. ProPublica, 30 December 2016. https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say

  • Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) “It’s reducing a human being to a percentage”: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Paper no. 377, pp 1–14. https://doi.org/10.1145/3173574.3173951

  • Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, Newman P, Parry V, Pegman G, Rodden T, Sorrell T, Wallis M, Whitby B, Winfield A (2017) Principles of robotics: regulating robots in the real world. Connect Sci 29(2):124–129


  • Buranyi S (2018) How to persuade a robot that you should get the job. The Guardian, 4 March 2018. https://www.theguardian.com/technology/2018/mar/04/robots-screen-candidates-for-jobs-artificial-intelligence

  • Clarke AC (1972/2013) Profiles of the future, 2nd edn. Hachette UK, London

  • Danaher J (2019) Automation and Utopia: human flourishing in a world without work. Harvard University Press, Cambridge


  • Dennett DC (1971) Intentional systems. J Philos 68(4):87–106


  • Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144(1):114–126


  • Dressel J, Farid H (2018) The accuracy, fairness, and limits of predicting recidivism. Sci Adv 4(1):eaao5580. https://doi.org/10.1126/sciadv.aao5580

  • Dutta S, Wei D, Yueksel H, Chen P-Y, Liu S, Varshney KR (2020) Is there a trade-off between fairness and accuracy? A perspective using mismatched hypothesis testing. In: Proceedings of the 37th International Conference on Machine Learning (PMLR 119), Vienna. https://proceedings.icml.cc/static/paper_files/icml/2020/2831-Paper.pdf

  • Elliott KC (2017) A tapestry of values: an introduction to values in science. Oxford University Press, Oxford

  • European Commission High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. European Commission, Brussels. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419

  • Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28:689–707


  • Fricker M (2007) Epistemic injustice: power and the ethics of knowing. Oxford University Press, Oxford


  • Haugeland J (1985) Artificial intelligence: the very idea. MIT Press, Cambridge


  • Hempel C (1965) The function of general laws in history. In: Aspects of scientific explanation and other essays in the philosophy of science. Free Press, New York


  • Hirsch T, Merced K, Narayanan S, Imel ZE, Atkins DC (2017) Designing contestability: interaction design, machine learning, and mental health. In: DIS '17: Proceedings of the 2017 Conference on Designing Interactive Systems, pp 95–99. https://doi.org/10.1145/3064663.3064703

  • Kaas M (2020) Raising ethical machines: bottom-up methods for implementing machine ethics. In: Thompson SJ (ed) Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global Press, Hershey (forthcoming)


  • Kleinberg J, Mullainathan S, Raghavan M (2016) Inherent trade-offs in the fair determination of risk scores. https://arxiv.org/pdf/1609.05807.pdf

  • Lawton G (2019) Simulating the world. New Scientist, 5 October 2019, issue 3250, pp 38–41. https://www.newscientist.com/article/mg24332500-800-ai-can-predict-your-future-behaviour-with-powerful-new-simulations/

  • Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151:90–103


  • Marcus G, Davis E (2019) Rebooting AI: building artificial intelligence we can trust. Pantheon, New York


  • Mulligan DK, Kluttz D, Kohli N (2019) Shaping our tools: contestability as a means to promote responsible algorithmic decision making in the professions (July 7, 2019). https://doi.org/10.2139/ssrn.3311894

  • Robbins S (2019) A misdirected principle with a catch: explicability for AI. Minds Mach 29(4):495–514


  • Smith BC (2019) The promise of artificial intelligence: reckoning and judgment. MIT Press, Cambridge


  • Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. MIT Press, Cambridge


  • Zerilli J, Knott A, Maclaurin J, Gavaghan C (2018) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32:661–683



Author information


Correspondence to Joel Walmsley.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Walmsley, J. Artificial intelligence and the value of transparency. AI & Soc 36, 585–595 (2021). https://doi.org/10.1007/s00146-020-01066-z

