AI and Society:1-14 (forthcoming)
Abstract
Strategies for improving the explainability of artificial agents are a key approach to supporting the understandability of their decision-making processes and, in turn, their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithm-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users' confidence and their perception of artificial agents' reliability. In particular, it focuses on non-expert users' perspectives, since users with little technical knowledge are likely to benefit the most from "post-hoc", everyday explanations. Drawing upon the explainable AI and social science literature, the paper investigates how artificial agents' explainability and trust are interrelated at different stages of an interaction. Specifically, it examines the possibility of implementing explainability as a strategy for trust building, trust maintenance, and trust restoration. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as their structural qualities and communication strategies. Accordingly, it contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations in supporting non-expert users' understanding and trust.
Similar books and articles
Developing Artificial Agents Worthy of Trust: "Would You Buy a Used Car From This Artificial Agent?" [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
Trust and Multi-Agent Systems: Applying the Diffuse, Default Model of Trust to Experiments Involving Artificial Agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
Trust and Ecological Rationality in a Computing Context. Jeff Buechner - 2013 - ACM SIGCAS Computers and Society 43 (1):47-68.
Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
"I Don't Trust You, You Faker!" On Trust, Reliance, and Artificial Agency. Fabio Fossa - 2019 - Teoria 39 (1):63-80.
Believing in Black Boxes: Must Machine Learning in Healthcare Be Explainable to Be Evidence-Based? Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
Trust Does Not Need to Be Human: It Is Possible to Trust Medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
Legal Requirements on Explainability in Machine Learning. Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2021 - Artificial Intelligence and Law 29 (2):149-169.
Explanatory Pragmatism: A Context-Sensitive Framework for Explainable Medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
A Metacognitive Approach to Trust and a Case Study: Artificial Agency. Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
What Do We Want From Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296:103473.
Modeling Artificial Agents' Actions in Context – A Deontic Cognitive Event Ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.