Unexplainability and Incomprehensibility of Artificial Intelligence

Abstract

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions that affect them are made. Similarly, understanding how an intelligent system functions is important for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), showing that advanced AIs would not be able to accurately explain some of their decisions and that, for the decisions they could explain, people would not understand some of those explanations.

Links

PhilArchive

Similar books and articles

AI: Its Nature and Future. Margaret A. Boden - 2016 - Oxford University Press UK.
Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).

Analytics

Added to PP: 2019-06-24
Downloads: 7,216 (#611)
Downloads (6 months): 644 (#1,972)

Author's Profile

Roman Yampolskiy
University of Louisville
