Minds and Machines 29 (4):495-514 (2019)

Scott Robbins
Universität Bonn
There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU Commission, and others all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. 2018, pp. 689–707). There is a strong intuition that if an algorithm decides, for example, whether to give someone a loan, then that algorithm should be explicable. I argue here, however, that such a principle is misdirected. The property of requiring explicability should attach to a particular action or decision rather than the entity making that decision. It is the context and the potential harm resulting from decisions that drive the moral need for explicability—not the process by which decisions are reached. Related to this is the fact that AI is used for many low-risk purposes for which it would be unnecessary to require that it be explicable. A principle requiring explicability would prevent us from reaping the benefits of AI used in these situations. Finally, the explanations given by explicable AI are only fruitful if we already know which considerations are acceptable for the decision at hand. If we already have these considerations, then there is no need to use contemporary AI algorithms because standard automation would be available. In other words, a principle of explicability for AI makes the use of AI redundant.
DOI 10.1007/s11023-019-09509-3



