Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque

AI and Ethics (forthcoming)

Abstract

Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.


Links

PhilArchive


Similar books and articles

Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–160:17.



Author's Profile

Uwe Peters
Utrecht University

Citations of this work

Explainable AI in the military domain. Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
