Algorithmic and human decision making: for a double standard of transparency

Open Forum, AI & SOCIETY

Abstract

Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency, and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required of algorithmic decisions than of human ones. Our arguments have direct implications for what we demand from explainable algorithms in decision-making contexts such as automated transportation.


Notes

  1. Zerilli et al. do not consider different reasons that might justify a double standard. One such reason might be that, unlike machines, human beings have a right to privacy and so are protected from intrusive forms of transparency. It may turn out that AI systems are demanded to be transparent not because the transparency of human decision making is overestimated, but because humans enjoy rights that machines do not. Here, however, we will not develop this possibility any further.

  2. We suspect that our example generalises: whenever an artefact malfunctions due to a technical detail, design-level explanations are called for. Otherwise, we will not understand the artefact’s defective behavior.

  3. Zerilli et al. use the terms ‘action’ and ‘behavior’ interchangeably in their paper. For the purpose of this paper, we do the same.

  4. See Guidotti et al. (2018) for a survey of explainable AI methods and Kasirzadeh (2021) for a critical discussion.

References

  • Binns R, Van Kleek M, Veale M, Lyngs U, Zhao J, Shadbolt N (2018) ‘It’s reducing a human being to a percentage’: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA

  • Corbett-Davies S, Pierson E, Feller A, Goel S, Huq A (2017) Algorithmic decision making and the cost of fairness. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, p 797–806

  • Creel KA (2020) Transparency in complex computational systems. Philos Sci 87(4):568–589

  • Davis RH, Edelman D, Gammerman A (1992) Machine-learning algorithms for credit-card applications. IMA J Manag Math 4(1):43–51

  • de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 1–10

  • Dennett DC (1987) The intentional stance. MIT Press

  • Feller A, Pierson E, Corbett-Davies S, Goel S (2016) A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post, 17 October 2016

  • Gonzalez MF, Capman JF, Oswald FL, Theys ER, Tomczak DL (2019) “Where’s the IO?” Artificial intelligence and machine learning in talent management systems. Pers Assess Decis 5(3):5

  • Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D (2018) A survey of methods for explaining black box models. ACM Comput Surv 51(5):1–42

  • Johnston P, Harris R (2019) The Boeing 737 MAX saga: lessons for software organizations. Softw Qual Prof 21(3):4–12

  • Kasirzadeh A (2021) Reasons, values, stakeholders: a philosophical framework for explainable artificial intelligence. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2021), p 14

  • Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453

  • Rudin C (2019) Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215

  • Schroeder T (2005) Moral responsibility and Tourette syndrome. Philos Phenomenol Res 71(1):106–123

  • Walmsley J (2020) Artificial intelligence and the value of transparency. AI Soc 1–11

  • Zerilli J, Knott A, Maclaurin J, Gavaghan C (2019) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32(4):661–683

Acknowledgements

We would like to express our deep gratitude to Alistair Knott, James Maclaurin, and Colin Gavaghan for valuable discussions and suggestions at the University of Otago, New Zealand. Special thanks go to John Zerilli for extensive comments on an earlier draft of this paper. Furthermore, we are very grateful for the opportunity to present this work at a seminar of the Humanising Machine Intelligence project at the Australian National University. We are thankful for insightful feedback from the members of the project, in particular Seth Lazar, Sylvie Thiebaux, Damian Clifford, Pamela Robinson, and Jenny Davis. Finally, we would like to thank each other.

Author information

Corresponding author

Correspondence to Atoosa Kasirzadeh.

About this article

Cite this article

Günther, M., Kasirzadeh, A. Algorithmic and human decision making: for a double standard of transparency. AI & Soc 37, 375–381 (2022). https://doi.org/10.1007/s00146-021-01200-5
