Toward personalized XAI: A case study in intelligent tutoring systems☆
Introduction
Existing research on Explainable AI (XAI) suggests that having AI systems explain their inner workings to their end users can help foster transparency, interpretability, and trust (e.g., [3], [4], [5], [6]). However, there are also results suggesting that such explanations are not always wanted by, or beneficial for, all users (e.g., [7], [8], [9]). Our long-term goal is to understand when having AI systems provide explanations to justify their behavior is useful, and how this may depend on factors such as context, task criticality, and user differences (e.g., expertise, personality, cognitive abilities, and transient states like confusion or cognitive load). Our vision is that of personalized XAI: endowing AI agents with the ability to understand when and how to provide explanations to their end users.
As a step toward this vision, in this paper we present and evaluate an explanation functionality for the hints provided by the Adaptive CSP (ACSP) applet, an Intelligent Tutoring System (ITS) that helps students learn an algorithm for solving constraint satisfaction problems. ITS research investigates how to create educational systems that can model students' relevant needs, states, and abilities (e.g., domain knowledge, meta-cognitive abilities, affective states) and provide personalized instruction accordingly [10]. We chose to focus on an ITS because, despite increasing interest in XAI research encompassing applications such as recommender systems [4], [11], [12], [13], [14], office assistants [8], and intelligent everyday interactive systems (e.g., Google Suggest, iTunes Genius) [7], there has so far been comparatively little research on XAI in ITS. Most of this research has focused on leveraging explanations to help students learn the target skills and knowledge: the AI enables the ITS to generate correct solutions for target problems (e.g., a medical diagnosis [15], [16], or suitable negotiation tactics [17], [18]), and the ITS includes mechanisms that let students ask for explanations of the solution process as a way to facilitate learning. In contrast, there is still limited research on leveraging XAI to justify the pedagogical decisions of an ITS [19]. Yet an ITS's aim of delivering highly individualized pedagogical interventions makes the educational context a high-stakes one for AI, because such interventions may have a long-lasting impact on people's learning and development. If explanations can increase the transparency and interpretability of an ITS's pedagogical actions, they might improve both the ITS's effectiveness and its acceptance by students and educators [3].
Related research has sought to increase ITS transparency by having an ITS show its assessment of students' relevant abilities via an Open Learner Model (OLM [20]), with initial results showing that this can help improve student learning (e.g., [21]) and learning abilities (e.g., ability to self-assess; [22]). There is also anecdotal evidence that an OLM can impact students' trust [23].
In this paper, we go beyond OLM and investigate the effect of having an ITS generate more explicit explanations of both its assessment of the students as well as the pedagogical actions that the ITS puts forward based on this assessment. We also look at whether there is an impact of specific user characteristics on explanation usage and effectiveness.
A formal comparison of students interacting with the ACSP applet with and without the explanation functionality shows that the available explanations improve students' trust in the ACSP hints, perceived usefulness of the hints, and intention to use the system again. Our results also show an impact of user characteristics on how much students look at explanations when they are available, as well as on their learning gains, which provides useful insights to inform the design of personalized XAI for ITS.
Although varied reactions to explanations have been observed with several AI-driven interactive systems (e.g., [4], [5], [7], [9]), there has so far been limited work linking these reactions to user characteristics. Existing results on explanations in recommender systems have shown an impact of Need for Cognition [12] (a personality trait [24]), of Openness (also a personality trait [25]), of music sophistication [26], and of users' decision-making style (rational vs. intuitive) [27]; Naveed et al. [28] have shown an impact of perceived user expertise on explanations provided by an intelligent assistant. Our results contribute to this line of research by: (i) looking at explanations for a different type of intelligent system (an ITS); (ii) confirming an impact of Need for Cognition; (iii) showing the effect of an additional personality trait (Conscientiousness) as well as of a cognitive ability related to reading proficiency. Thus, our findings broaden the understanding of which user differences should be further investigated when designing personalized XAI across application domains.
The rest of the paper is structured as follows. Section 2 discusses related work. Section 3 describes the ACSP and the AI mechanisms that drive its adaptive hints. Section 4 illustrates the explanation functionality we added to the ACSP and Section 5 the study to evaluate it along with the impact of user characteristics. Section 6 presents results related to usage and perception of the explanation functionality, whereas Section 7 reports results on the impact of explanations on student learning and perception of the ACSP hints. Section 8 provides a summary discussion of the results, and Section 9 wraps up with conclusions, limitations and future work.
Section snippets
Related work
There are encouraging results on the helpfulness of explanations in intelligent user interfaces. For example, Kulesza et al. [5] investigated explaining the predictions of an agent that helps its users organize their emails. They showed that explanations helped participants understand the system's underlying mechanism, enabling them to provide feedback to improve the agent's predictions. Coppers et al. [29] added explanations to an intelligent translation system, to describe how a suggested
Interactive simulation for AC-3
The ACSP applet is an interactive simulation that provides tools and personalized support for students to explore the workings of the Arc Consistency 3 (AC-3) algorithm for solving constraint satisfaction problems [43]. AC-3 represents a constraint satisfaction problem as a network of variable nodes and constraint arcs. The algorithm iteratively makes individual arcs consistent by removing variable domain values inconsistent with a given constraint, until it has considered all arcs and the
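For readers unfamiliar with AC-3, the sketch below gives a minimal, self-contained version of the algorithm just described: arcs are revised one at a time, values without support are pruned, and arcs pointing into a pruned variable are re-queued. The data structures (a dict of domains and a dict of per-arc predicates) are illustrative choices for this sketch, not the applet's actual implementation.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values from x's domain with no supporting value in y's domain.
    Returns True if any value was removed."""
    allowed = constraints[(x, y)]          # predicate allowed(vx, vy) -> bool
    removed = False
    for vx in list(domains[x]):
        if not any(allowed(vx, vy) for vy in domains[y]):
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency on a binary CSP.
    domains: dict mapping each variable to a set of candidate values (pruned in place).
    constraints: dict mapping each directed arc (x, y) to a predicate that is True
                 when a pair of values satisfies the constraint between x and y.
    Returns False if some domain becomes empty (no solution), True otherwise."""
    queue = deque(constraints.keys())      # start with every arc in the network
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False               # a domain was wiped out
            # x's domain shrank, so every arc pointing into x must be rechecked
            for (z, w) in constraints:
                if w == x and z != y:
                    queue.append((z, w))
    return True

# Toy example: X < Y, both with domain {1, 2, 3}
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda vx, vy: vx < vy,
               ("Y", "X"): lambda vy, vx: vy > vx}
print(ac3(domains, constraints), domains)   # True {'X': {1, 2}, 'Y': {2, 3}}
```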
Pilot user study
As a first step to build an explanation functionality for the ACSP applet, we wanted to gain an initial understanding of the type of explanations that students would like to have about the ACSP hints. To do so, we instrumented ACSP with a tool to elicit this information.
Namely, we added to each hint's dialogue box a button (“explain hint”) that enables a panel (shown in Fig. 4), allowing students to choose one or more of the following options for explanations they would have liked for these
User study
This section describes the user study that we conducted to investigate (i) whether explanations of the ACSP hints influence how students perceive and learn from the adaptive hints; and (ii) whether this influence, as well as explanation usage, depends on specific user characteristics.
The study followed a between-subjects design with two conditions in which participants interacted with the ACSP with and without the explanation functionality (explanation and control condition, respectively).
Usage and perception of explanations
This section presents results on how the 30 participants in the explanation condition of the study used and perceived the explanation functionality. Of these 30 participants, 24 accessed the explanation functionality at least once. Section 6.1 reports results on how these users rated the explanation functionality, Section 6.2 looks at how and how much they used it, and Section 6.3 reports whether the amount of usage is modulated by the user characteristics described in Section 5.2. Section 6.4
Effects of explanations on learning and users' perception of the hints
In this section, we investigate whether leveraging the ACSP explanations has an effect on student learning and on their perception of the ACSP hints, and whether these effects may be affected by user characteristics.
We do so by comparing learning and hint perception between the participants in the explanation condition who accessed the explanation functionality and the participants in the control condition. We exclude from the analysis the six users in the explanation condition who did not
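To make this comparison concrete, the sketch below shows one plausible way to contrast learning gains between the two groups. The data file, column names, and the use of Welch's t-test are illustrative assumptions for demonstration, not the analysis actually reported in the paper.

```python
# Illustrative sketch only: file name, columns, and test choice are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("study_data.csv")   # hypothetical file, one row per participant

# Explanation group: participants who had the functionality AND opened it at least once.
expl = df[(df["condition"] == "explanation") & (df["explanation_accesses"] > 0)]
ctrl = df[df["condition"] == "control"]

# Learning gain = post-test score minus pre-test score.
gain_expl = expl["post_test"] - expl["pre_test"]
gain_ctrl = ctrl["post_test"] - ctrl["pre_test"]

res = stats.ttest_ind(gain_expl, gain_ctrl, equal_var=False)  # Welch's t-test
print(f"learning gain: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```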
Discussion
Designing an explanation functionality that conveys to ACSP users at least some of the AI mechanisms driving the ACSP adaptive hints is challenging because of the complexity of these mechanisms. The study presented in this paper aimed to ascertain whether our first attempt at an explanation functionality that illustrates such AI mechanisms can have a positive impact on how students perceive the ACSP hints and learn from them.
Our results indicate that accessing these explanations leads to students
Conclusions, limitations and future work
This paper contributes to understanding the need for personalization in XAI. Although the importance of having AI artifacts that can explain their actions to their users is undisputed, there is mounting evidence that one-size-fits-all explanations are not ideal, and that explanations may need to be tailored to several factors including context, task criticality, and specific user needs. Our research focuses on the latter and, in this paper, we present a case study that investigates the need for
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council of Canada NSERC [Grant #22R01881].
References (71)
Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion (Jun. 2020).
A very brief measure of the big-five personality domains. J. Res. Pers. (Dec. 2003).
Pedagogy and usability in interactive algorithm visualizations: designing and evaluating CIspace. Interact. Comput. (Jan. 2008).
Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. (Feb. 2019).
The five-dimensional curiosity scale: capturing the bandwidth of curiosity and identifying four unique subgroups of curious people. J. Res. Pers. (Apr. 2018).
Visual analytics in deep learning: an interrogative survey for the next frontiers. IEEE Trans. Vis. Comput. Graph. (Aug. 2019).
AI in education needs interpretable machine learning: lessons from open learner modelling.
Explaining collaborative filtering recommendations.
Principles of explanatory debugging to personalize interactive machine learning.
AI for explaining decisions in multi-agent environments.
Are explanations always important? A study of deployed, low-cost intelligent interactive systems.
Understanding the utility of rationale in a mixed-initiative system for GUI customization.
Taking advice from intelligent systems: the double-edged sword of explanations.
Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing E-learning.
Too much, too little, or just right? Ways explanations impact end users' mental models.
To explain or not to explain: the effects of personal characteristics when explaining music recommendations.
A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User-Adapt. Interact.
Exploring user attitudes towards different approaches to command recommendation in feature-rich software.
From Guidon to Neomycin and Heracles in twenty short lessons. AI Mag.
Guidon-manage revisited: a socio-technical systems approach.
Teaching negotiation skills through practice and reflection with virtual humans. SIMULATION.
Explainable artificial intelligence for training and tutoring.
Improving student-system interaction through data-driven explanations of hierarchical reinforcement learning induced pedagogical policies.
SMILI: a framework for interfaces to learning data in open learner models, learning analytics and related fields. J. Artif. Intell. Educ.
Enhancing learning outcomes through self-regulated learning support with an open learner model. User Model. User-Adapt. Interact.
Adolescents' self-regulation during job interviews through an AI coaching environment.
Student preferences for editing, persuading, and negotiating the open learner model.
The efficient assessment of need for cognition. J. Pers. Assess.
What's in a user? Towards personalising transparency for music recommender interfaces.
Argumentation-based explanations in recommender systems: conceptual framework and empirical results.
I can do better than your AI: expertise and explanations.
Intellingo: an intelligible translation environment.
“Why should I trust you?”: explaining the predictions of any classifier.
Anchors: high-precision model-agnostic explanations.
CoCoX: generating conceptual and counterfactual explanations via fault-lines.
☆ This paper is part of the Special Issue on Explainable AI.