
Artificial Intelligence

Volume 298, September 2021, 103503

Toward personalized XAI: A case study in intelligent tutoring systems

https://doi.org/10.1016/j.artint.2021.103503

Abstract

Our research is a step toward ascertaining the need for personalization in XAI, and we do so in the context of investigating the value of explanations of AI-driven hints and feedback in Intelligent Tutoring Systems (ITS). We added an explanation functionality to the Adaptive CSP (ACSP) applet, an interactive simulation that helps students learn an algorithm for constraint satisfaction problems by providing AI-driven hints adapted to their predicted level of learning. We present the design of the explanation functionality and the results of a controlled study to evaluate its impact on students' learning and perception of the ACSP hints. The study includes an analysis of how these outcomes are modulated by several user characteristics, such as personality traits and cognitive abilities, to assess whether explanations should be personalized to these characteristics. Our results indicate that providing explanations increases students' trust in the ACSP hints, perceived usefulness of the hints, and intention to use them again. In addition, we show that students' access to the ACSP explanations and their learning gains are modulated by three user characteristics, Need for Cognition, Conscientiousness, and Reading Proficiency, providing insights on how to personalize the ACSP explanations to these traits, as well as initial evidence on the potential value of personalized Explainable AI (XAI) for ITS.

Introduction

Existing research on Explainable AI (XAI) suggests that having AI systems explain their inner workings to their end users can help foster transparency, interpretability, and trust (e.g., [3], [4], [5], [6]). However, there are also results suggesting that such explanations are not always wanted by or beneficial for all users (e.g., [7], [8], [9]). Our long-term goal is to understand when having AI systems provide explanations to justify their behavior is useful, and how this may depend on factors such as context, task criticality, and user differences (e.g., expertise, personality, cognitive abilities, and transient states like confusion or cognitive load). Our vision is that of personalized XAI, endowing AI agents with the ability to understand when and how to provide explanations to their end users.

As a step toward this vision, in this paper, we present and evaluate an explanation functionality for the hints provided in the Adaptive CSP (ACSP) applet, an Intelligent Tutoring System (ITS) that helps students learn an algorithm to solve constraint satisfaction problems. ITS research investigates how to create educational systems that can model students' relevant needs, states, and abilities (e.g., domain knowledge, meta-cognitive abilities, affective states) and provide personalized instruction accordingly [10]. We chose to focus on an ITS in this paper because, despite increasing interest in XAI research encompassing applications such as recommender systems [4], [11], [12], [13], [14], office assistants [8], and intelligent everyday interactive systems (e.g., Google Suggest, iTunes Genius) [7], there has thus far been comparatively less research on XAI in ITS. Most of this research has focused on leveraging explanations to help students learn the target skills and knowledge. Namely, here the AI is used to enable an ITS to generate correct solutions for target problems (e.g., a medical diagnosis [15], [16], or suitable negotiation tactics [17], [18]), and the ITS includes mechanisms that allow students to ask for explanations of the solution process, as a way to facilitate learning. On the other hand, there is still limited research on leveraging XAI to justify the pedagogical decisions of an ITS [19]. Yet, an ITS's aim of delivering highly individualized pedagogical interventions makes the educational context a high-stakes one for AI, because such interventions may have a potentially long-lasting impact on people's learning and development. If explanations can increase the transparency and interpretability of an ITS's pedagogical actions, this might improve both the effectiveness of the ITS and its acceptance by students and educators [3].

Related research has sought to increase ITS transparency by having an ITS show its assessment of students' relevant abilities via an Open Learner Model (OLM [20]), with initial results showing that this can help improve student learning (e.g., [21]) and learning abilities (e.g., ability to self-assess; [22]). There is also anecdotal evidence that an OLM can impact students' trust [23].

In this paper, we go beyond OLMs and investigate the effect of having an ITS generate more explicit explanations of both its assessment of the students and the pedagogical actions that the ITS puts forward based on this assessment. We also look at whether specific user characteristics impact explanation usage and effectiveness.

A formal comparison of students interacting with the ACSP applet with and without the explanation functionality shows that the available explanations improve students' trust in the ACSP hints, perceived usefulness of the hints, and intention to use the system again. Our results also show the impact of user characteristics on how much students look at explanations when they are available, as well as on their learning gains, which provides useful insights to inform the design of personalized XAI for ITS.

Despite the fact that varied reactions to explanations have been observed with several AI-driven interactive systems (e.g., [4], [5], [7], [9]), thus far there has been limited work on linking these reactions to user characteristics. Existing results on explanations in recommender systems have shown an impact of Need for Cognition [12] (a personality trait [24]), of Openness (also a personality trait [25]), of music sophistication [26], and of user decision-making style (rational vs. intuitive) [27]; Naveed et al. [28] have shown an impact of perceived user expertise on explanations of an intelligent assistant. Our results contribute to this line of research by: (i) looking at explanations for a different type of intelligent system (an ITS); (ii) confirming an impact of Need for Cognition; (iii) showing the effect of an additional personality trait (Conscientiousness), as well as of a cognitive ability related to reading proficiency. Thus, our findings broaden the understanding of which user differences should be further investigated when designing personalized XAI in a variety of application domains.

The rest of the paper is structured as follows. Section 2 discusses related work. Section 3 describes the ACSP and the AI mechanisms that drive its adaptive hints. Section 4 illustrates the explanation functionality we added to the ACSP and Section 5 the study to evaluate it along with the impact of user characteristics. Section 6 presents results related to usage and perception of the explanation functionality, whereas Section 7 reports results on the impact of explanations on student learning and perception of the ACSP hints. Section 8 provides a summary discussion of the results, and Section 9 wraps up with conclusions, limitations and future work.


Related work

There are encouraging results on the helpfulness of explanations in intelligent user interfaces. For example, Kulesza et al. [5] investigated explaining the predictions of an agent that helps its users organize their emails. They showed that explanations helped participants understand the system's underlying mechanism, enabling them to provide feedback to improve the agent's predictions. Coppers et al. [29] added explanations to an intelligent translation system, to describe how a suggested

Interactive simulation for AC-3

The ACSP applet is an interactive simulation that provides tools and personalized support for students to explore the workings of the Arc Consistency 3 (AC-3) algorithm for solving constraint satisfaction problems [43]. AC-3 represents a constraint satisfaction problem as a network of variable nodes and constraint arcs. The algorithm iteratively makes individual arcs consistent by removing variable domain values inconsistent with a given constraint, until it has considered all arcs and the
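
For readers who want a concrete picture of the algorithm described above, the following is a minimal sketch of standard AC-3 in Python. It is illustrative only, not the ACSP applet's implementation; the data structures (a dictionary of variable domains and a dictionary mapping directed arcs to binary constraint predicates) are assumptions made for this example.

    from collections import deque

    def revise(domains, constraints, xi, xj):
        # Remove from xi's domain any value with no supporting value in xj's domain.
        revised = False
        for x in list(domains[xi]):
            if not any(constraints[(xi, xj)](x, y) for y in domains[xj]):
                domains[xi].remove(x)
                revised = True
        return revised

    def ac3(domains, constraints):
        # domains: variable -> set of values; constraints: (xi, xj) -> binary predicate.
        # Returns False if some domain is emptied (no solution), True otherwise.
        queue = deque(constraints.keys())  # start with every arc
        while queue:
            xi, xj = queue.popleft()
            if revise(domains, constraints, xi, xj):
                if not domains[xi]:
                    return False
                # xi's domain shrank, so arcs pointing into xi must be rechecked
                queue.extend((xk, xl) for (xk, xl) in constraints
                             if xl == xi and xk != xj)
        return True

    # Example: X < Y with domains {1, 2, 3}; AC-3 prunes 3 from X and 1 from Y.
    domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
    constraints = {("X", "Y"): lambda x, y: x < y,
                   ("Y", "X"): lambda y, x: x < y}
    ac3(domains, constraints)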

Pilot user study

As a first step to build an explanation functionality for the ACSP applet, we wanted to gain an initial understanding of the type of explanations that students would like to have about the ACSP hints. To do so, we instrumented ACSP with a tool to elicit this information.

Namely, we added to each hint's dialogue box a button (“explain hint”) that enables a panel (shown in Fig. 4), allowing students to choose one or more of the following options for explanations they would have liked for these

User study

This section describes the user study that we conducted to investigate i) whether explanations of the ACSP hints influence how students perceive and learn from the adaptive hints; ii) whether this influence, as well as explanation usage, depends on specific user characteristics.

The study followed a between-subjects design with two conditions in which participants interacted with the ACSP with and without the explanation functionality (explanation and control condition, respectively).

Usage and perception of explanations

This section presents results on how the 30 participants in the explanation condition of the study used and perceived the explanation functionality. Of these 30 participants, 24 accessed the explanation functionality at least once. Section 6.1 reports results on how these users rated the explanation functionality, Section 6.2 looks at how and how much they used it, and Section 6.3 reports whether the amount of usage is modulated by the user characteristics described in Section 5.2. Section 6.4

Effects of explanations on learning and users' perception of the hints

In this section, we investigate whether leveraging the ACSP explanations has an effect on student learning and on their perception of the ACSP hints, and whether these effects are modulated by user characteristics.

We do so by comparing learning and hint perception between participants in the explanation condition who accessed the explanation functionality and participants in the control condition. We exclude from the analysis the six users in the explanation condition who did not
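
The comparison described above amounts to testing whether the effect of the explanation condition on an outcome (e.g., learning gain) is moderated by a user characteristic. The paper's own statistical procedure is not shown in this snippet; the sketch below illustrates one common way to test such moderation, a condition-by-characteristic interaction term in a linear model, using synthetic data and hypothetical column names.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per participant, with study condition,
    # a user characteristic score, and a pre/post-test learning gain.
    rng = np.random.default_rng(0)
    n = 60
    df = pd.DataFrame({
        "condition": rng.choice(["explanation", "control"], size=n),
        "need_for_cognition": rng.normal(0, 1, size=n),
        "learning_gain": rng.normal(0.3, 0.15, size=n),
    })

    # A significant condition x characteristic interaction would indicate that
    # the effect of explanations on learning is modulated by that characteristic.
    model = smf.ols("learning_gain ~ condition * need_for_cognition", data=df).fit()
    print(model.summary())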

Discussion

Designing an explanation functionality that conveys to ACSP users at least some of the AI mechanisms driving the ACSP adaptive hints is challenging, because of the complexity of such mechanisms. The study presented in this paper aimed to ascertain whether our first attempt at an explanation functionality that illustrates such AI mechanisms can have a positive impact on how students perceive the ACSP hints and learn from them.

Our results indicate that accessing these explanations leads to students

Conclusions, limitations and future work

This paper contributes to understanding the need for personalization in XAI. Although the importance of having AI artifacts that can explain their actions to their users is undisputed, there is mounting evidence that one-size-fits-all explanations are not ideal, and that explanations may need to be tailored to several factors including context, task criticality, and specific user needs. Our research focuses on the latter and, in this paper, we present a case study that investigates the need for

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council of Canada NSERC [Grant #22R01881].

References (71)

  • A. Bunt et al.

    Are explanations always important? A study of deployed, low-cost intelligent interactive systems

  • A. Bunt et al.

    Understanding the utility of rationale in a mixed-initiative system for GUI customization

  • K. Ehrlich et al.

    Taking advice from intelligent systems: the double-edged sword of explanations

  • B.P. Woolf

    Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing E-learning

    (2010)
  • T. Kulesza et al.

    Too much, too little, or just right? Ways explanations impact end users' mental models

  • M. Millecamp et al.

    To explain or not to explain: the effects of personal characteristics when explaining music recommendations

  • I. Nunes et al.

    A systematic review and taxonomy of explanations in decision support and recommender systems

    User Model. User-Adapt. Interact.

    (Dec. 2017)
  • M. Wiebe et al.

    Exploring user attitudes towards different approaches to command recommendation in feature-rich software

  • W.J. Clancey

    From Guidon to Neomycin and Heracles in twenty Short Lessons

    AI Mag.

    (Jul. 1986)
  • W.J. Clancey

    Guidon-manage revisited: a socio-technical systems approach

  • M. Core et al.

    Teaching negotiation skills through practice and reflection with virtual humans

    SIMULATION

    (Nov. 2006)
  • H.C. Lane et al.

    Explainable artificial intelligence for training and tutoring

  • G. Zhou et al.

    Improving student-system interaction through data-driven explanations of hierarchical reinforcement learning induced pedagogical policies

  • S. Bull et al.

    SMILI☺: a framework for interfaces to learning data in open learner models, learning analytics and related fields

    Int. J. Artif. Intell. Educ.

    (Mar. 2016)
  • Y. Long et al.

    Enhancing learning outcomes through self-regulated learning support with an open learner model

    User Model. User-Adapt. Interact.

    (Mar. 2017)
  • K. Porayska-Pomsta et al.

    Adolescents' self-regulation during job interviews through an AI coaching environment

  • A. Mabbott et al.

    Student preferences for editing, persuading, and negotiating the open learner model

  • J.T. Cacioppo et al.

    The efficient assessment of need for cognition

    J. Pers. Assess.

    (Jun. 1984)
  • M. Millecamp et al.

    What's in a user? Towards personalising transparency for music recommender interfaces

  • S. Naveed et al.

    Argumentation-based explanations in recommender systems: conceptual framework and empirical results

  • J. Schaffer et al.

    I can do better than your AI: expertise and explanations

  • S. Coppers et al.

    Intellingo: an intelligible translation environment

  • M.T. Ribeiro et al.

    “Why should I trust you?”: explaining the predictions of any classifier

  • M.T. Ribeiro et al.

    Anchors: high-precision model-agnostic explanations

  • A.R. Akula et al.

    CoCoX: generating conceptual and counterfactual explanations via fault-lines

This paper is part of the Special Issue on Explainable AI.
