Algorithmic and human decision making: for a double standard of transparency

AI and Society 37 (1):375-381 (2022)


Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly affects what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments to the contrary and specify two kinds of situations in which higher standards of transparency are required from algorithmic decisions than from human ones. Our arguments have direct implications for the demands placed on explainable algorithms in decision-making contexts such as automated transportation.

Similar books and articles

AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans.Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.


Author Profiles

Atoosa Kasirzadeh
University of Toronto, St. George Campus (PhD)
Mario Günther
Ludwig Maximilians Universität, München