Cognition

Volume 222, May 2022, 105020

Leveraging human agency to improve confidence and acceptability in human-machine interactions

https://doi.org/10.1016/j.cognition.2022.105020

Abstract

Repeated interactions with automated systems are known to affect how agents experience their own actions and choices. The present study explores the possibility of partially restoring the sense of agency in operators interacting with automated systems by providing additional information about the system's decision, i.e. its confidence. To do so, we implemented an obstacle avoidance task with different levels of automation and explicability. Levels of automation were varied by implementing conditions in which the participant was free or not free to choose which direction to take, whereas levels of explicability were varied by providing or not providing the participant with the system's confidence in the direction to take. We first assessed how automation and explicability interacted with participants' sense of agency, and then tested whether increased self-agency was systematically associated with greater confidence in the decision and improved system acceptability. The results showed an overall positive effect of system assistance. Providing additional information about the system's decision (explicability effect) and reducing the cognitive load associated with the decision itself (automation effect) were associated with a stronger sense of agency, greater confidence in the decision, and better performance. In addition to the positive effects of system assistance, acceptability scores revealed that participants perceived “explicable” systems more favorably. These results highlight the potential value of studying self-agency in human-machine interaction as a guideline for making automation technologies more acceptable and, ultimately, improving the usefulness of these technologies.

Introduction

Human agents are used to interacting with sophisticated computer systems designed to help them in their activities, and a significant number of our daily actions are now technologically mediated. This is the case for the airplane pilot who controls their aircraft through increasingly sophisticated pilot-assistance tools, but also for the doctor assisted in their diagnoses by artificial intelligence algorithms. The paramount importance of technological assistance is further demonstrated by the growing role that virtual assistants – be it our phones, computers or specific planning devices – play in our daily lives.

Repeated interactions with automated systems can be expected to affect the way individuals experience their own actions and choices. Such experience is often referred to as “sense of agency” (SoA) and describes the subjective feeling associated with controlling one's own actions and, through these actions, events in the outside world (Haggard & Tsakiris, 2009). SoA plays a key role in guiding attributions of responsibility (Bigenwald & Chambon, 2019) and serves as a key motivational force for human behaviour (Di Costa, Théro, Chambon, & Haggard, 2018). A detrimental effect of human-machine interactions on SoA has been demonstrated in a number of experimental studies, one of the most typical of which implements an aircraft supervision task (e.g. Berberian, Sarrazin, Le Blaye, & Haggard, 2012). In this task, the participant is responsible for supervising the movement of an aircraft that may encounter unpredictable obstacles. When a conflict occurs due to the presence of another aircraft, the participant has to decide on and implement the appropriate control to avoid the obstacle using a button-based interface. Following an established classification (Sheridan & Verplank, 1978), the level of automation of the task was manipulated from the user having complete control (no automation) to the computer performing the entire task with the participant simply observing (full automation). The results showed a decrease in the participant's sense of agency concomitant with the increase in automation, suggesting that increasing the level of automation tends to distract operators from the results of the action and to alter the emergence of a sense of control.

What makes automation particularly detrimental to the operator's sense of agency is not yet fully understood, but there is a relative consensus that the lack of transparency about how the system makes its decisions, or simply operates, is a key factor (Christoffersen & Woods, 2002; Klien, Woods, Bradshaw, Hoffman, & Feltovich, 2004). In most human-machine interactions, these decision processes are unknown, inaccessible, or even not explainable at all (Norman, 1990). Such opacity makes it difficult for the operator to link the system's intention to its actual state and to predict the sequence of events that will occur. Predictive mechanisms are known to play a critical role in the development of individuals' sense of agency, by allowing the attribution of observed sensory events to prior intentions (Chambon, Sidarus, & Haggard, 2014; Chambon, Wenke, Fleming, Prinz, & Haggard, 2013; Haggard & Eitam, 2015). By affecting the predictability of actions, the inherent opacity of technological systems is therefore likely to alter perceived agency in human operators.

In a previous study, we directly investigated how system predictability impacts the development of agency experience during human-machine interaction (Le Goff, Rey, Haggard, Oullier, & Berberian, 2018). In particular, we explored the benefit of prime messages regarding system intention while supervising an automated system. We tested whether providing information about what to do next mitigated the deleterious effect of reduced freedom of action on agency and, in doing so, increased the user's level of acceptability, along with increased control and performance. Our results suggest that displaying the system's intentions prior to an action is a good candidate for maximizing the experience of agency in supervisory tasks, and for increasing system acceptability as well. These preliminary results open interesting avenues as to how to modulate the emergence of the experience of agency during human-machine interaction.

The present study aims to go further in exploring the information required to make technological systems more intelligible, and to test whether improving intelligibility concomitantly increases the level of agency experienced by human operators. In two distinct experiments, we explored the role of communicating specific metacognitive information in improving the SoA of participants interacting with distinct automated systems. Specifically, both experiments implemented an avoidance task with different levels of automation and explicability.

Levels of automation were varied by implementing conditions in which the participant was free to choose which direction to take (free choice trials) or not (forced choice trials). Levels of explicability were varied by providing or not providing the participant with the system's confidence in the direction to take. Confidence can be seen as a measure of the uncertainty (or certainty) associated with one's choice or action (Fleming & Lau, 2014). Communicating confidence was intended to improve the explicability of the system's decision by increasing its transparency, that is, by providing the participant with additional information, such as the level of confidence associated with that decision (Tintarev & Masthoff, 2015). Indeed, the level of uncertainty (or confidence) associated with a decision is a key explanatory factor for why a decision is made or not, and whether or not that decision will be updated or revised in the future (Balsdon, Wyart, & Mamassian, 2020). The beneficial role of confidence in decision making has already been demonstrated in group settings, where sharing metacognitive representations increases joint performance (Bahrami et al., 2010; Fusaroli et al., 2012; Le Bars et al., 2020) and enhances team coordination (Lausic, 2009; Le Bars, Devaux, Nevidal, Chambon, & Pacherie, 2020; Poizat, Bourbousson, Saury, & Sève, 2009). Communicating confidence also makes performance more fluid and prospectively improves SoA (Chambon, Filevich, & Haggard, 2014; Sidarus, Vuorre, & Haggard, 2017), especially when sensorimotor information is not available (Pacherie, 2013), such as when interacting with an automated system. Finally, there is indirect evidence that improving the operator's SoA during interaction with automated systems concomitantly improves the acceptability of the system's decision itself (Le Goff et al., 2018). In addition to exploring the relationships between explicability and SoA, we also tested whether an increase in the participant's SoA could be consistently associated with greater confidence in their decision and greater acceptability of the system.

In both experiments, the participant's choice and three additional measures were collected: (i) Temporal Binding (TB), a widely-reported temporal compression between a voluntary action and its consequence (hence originally referred to as “intentional binding”), as an implicit proxy of the participant's sense of agency (Caspar, 2017; Ebert & Wegner, 2010; Haggard, Clark, & Kalogeras, 2002; Vogel, Jording, Esser, Weiss, & Vogeley, 2021); (ii) a measure of the participant's confidence in either their decision (free trials) or the system's decision (forced trials); and finally, (iii) the perceived acceptability of the system by the participant (Van Der Laan, Heino, & De Waard, 1997).
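
This preview does not detail how TB was computed. As an illustration only, TB of this kind is commonly quantified as an interval estimation error, i.e. the judged minus the actual delay between an action and its outcome, with more negative scores indicating stronger temporal compression. A minimal sketch in Python under that assumption (not the authors' code; all column names are hypothetical):

import pandas as pd

# Minimal sketch: temporal binding (TB) quantified as an interval estimation
# error, i.e. judged minus actual action-outcome delay. More negative scores
# indicate stronger compression, conventionally read as a stronger implicit
# sense of agency. All column names here are hypothetical.
trials = pd.DataFrame({
    "condition": ["free", "free", "forced", "forced"],
    "actual_ms": [400, 700, 400, 700],   # true action-outcome interval
    "judged_ms": [310, 580, 390, 660],   # participant's interval estimate
})

trials["tb_ms"] = trials["judged_ms"] - trials["actual_ms"]
print(trials.groupby("condition")["tb_ms"].mean())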

In Experiment 1, we had three key predictions: (1) Participants would experience lower levels of agency when forced to follow the system's decision, compared to freely choosing (automation effect) (Barlas, Hockley, & Obhi, 2018; Berberian et al., 2012; Caspar, Cleeremans, & Haggard, 2018); (2) communicating to participants the system's confidence in the best decision would restore or even improve participants' SoA (explicability effect) (Sidarus et al., 2017); and finally (3) an increase in the participant's SoA would be associated with an increase in system acceptability (Le Goff et al., 2018). In Experiment 2, we further explored the relationships between our control measure, our decision confidence measure, and task demands. We leveraged a procedure developed in a previous study (Potts & Carlson, 2019) to clarify the contribution of task difficulty to the relationship between automation, explicability and sense of agency, using a modified version of the avoidance task from Experiment 1.

Section snippets

Participants

Forty-four participants were recruited to participate in Experiment 1 (31 females, mean age = 33.2, SD = 8.4). In the absence of existing data with regard to our research goal, sample size was determined a priori on the basis of previous studies on SoA using temporal binding measures in a similar experimental design (free vs. forced-choice trials, Caspar, 2017). With this in mind, we targeted a sample size of 44 participants, similar to that of Caspar (2017), with a potential dropout/exclusion

Performance on the avoidance task - % correct responses

The ANOVA revealed significant main effects of the explicability factor (mean correct rate, guided = 87.88, SD = 11.59; mean correct rate, unguided = 81.71, SD = 15.42; F(1,36) = 43.01; p < 0.001; ηp2 = 0.544) and the difficulty factor (F(2,72) = 322.89; p < 0.001; ηp2 = 0.900). Post hoc comparisons further showed that participants gave more correct answers as the difficulty of the task decreased (all p's < 0.001). A significant explicability-by-difficulty interaction was also found
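
As an illustration of the analysis reported here, a two-way repeated-measures ANOVA (explicability × difficulty) with partial eta-squared effect sizes can be run with the pingouin package. This is a sketch, not the authors' analysis code; it assumes a long-format data table, and all file and column names are hypothetical:

import pandas as pd
import pingouin as pg

# Sketch of the reported two-way repeated-measures ANOVA on % correct
# (explicability x difficulty), with partial eta-squared as effect size.
# Assumes one row per participant x condition cell; names are hypothetical.
df = pd.read_csv("avoidance_task_exp1.csv")

aov = pg.rm_anova(data=df, dv="pct_correct",
                  within=["explicability", "difficulty"],
                  subject="participant", effsize="np2")
print(aov)  # F, uncorrected p, and np2 per effect

# Bonferroni-corrected post hoc comparisons across difficulty levels
post = pg.pairwise_tests(data=df, dv="pct_correct", within="difficulty",
                         subject="participant", padjust="bonf")
print(post)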

Participants

Thirty-nine participants were recruited to participate in Experiment 2 (28 females, mean age = 34.23, SD = 9.24). Sample size calculation was based on the effects found in Experiment 1. An a priori power calculation was performed using the G*Power software (Faul et al., 2009), with a power of 0.80 and a two-sided alpha level set at 0.05. The number of participants required to detect a mean effect size of d = 0.4 in a paired comparison, with ~10% exclusion in the sample based on predefined exclusion
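
The authors used G*Power; as an illustration, the same stated parameters (paired comparison, d = 0.4, two-sided alpha = 0.05, power = 0.80, ~10% exclusion margin) can be plugged into statsmodels' power module. A minimal sketch of such a calculation, not the paper's actual computation:

import math
from statsmodels.stats.power import TTestPower

# A priori sample size for a paired t-test (one-sample test on differences):
# effect size d = 0.4, two-sided alpha = 0.05, power = 0.80.
n_pairs = TTestPower().solve_power(effect_size=0.4, alpha=0.05,
                                   power=0.80, alternative="two-sided")

# Inflate the target by the predefined ~10% dropout/exclusion margin.
n_recruit = math.ceil(n_pairs / 0.9)
print(f"required n = {n_pairs:.1f}; recruit {n_recruit} with 10% margin")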

Avoidance task performance - % correct responses

The ANOVA revealed significant main effects of the explicability (F(1,30) = 13.00; p = 0.001; ηp2 = 0.302) and the difficulty factors (F(1,30) = 304.64; p < 0.001; ηp2 = 0.910) on mean correct response rates. Post hoc comparisons showed that the performance was higher in guided trials than in unguided trials (p = 0.001) and decreased with increasing difficulty (all p's < 0.0001). The explicability-by-difficulty interaction effect was not significant (F(2,60) = 0.44; p = 0.645).

Temporal binding

We found

General discussion

Our two experiments aimed at characterizing the key factors responsible for the sense of agency (SoA) in an operator interacting with an automated system. As previously suggested, reducing the opacity of system decisions can contribute to improving human-machine interactions and, by extension, the operator's sense of control over the decisions made during the task (Berberian et al., 2012; Norman, 1990). To test this suggestion more directly, we designed a task in which levels of automation and

Conclusions and perspectives

In two experiments, we showed that explicability could be used as a lever to improve the agency of operators interacting with automated systems. Improving the explicability of the decisions of the system itself increases – in free-choice trials – or restores – in forced-choice trials – operators' sense of agency (SoA). Importantly, we found that the difficulty of the task at hand modulated the relationship between explicability and automation. When the subject acts alone and receives no

Author contributions

Q.V., B.B., M.P. and V.C. developed the study concept. Testing, data collection and data analysis were performed by Q.V. Q.V. drafted the manuscript. B.B. and V.C. provided critical revisions. All authors approved the final version of the manuscript for submission.

Declaration of Competing Interest

None.

Acknowledgements

This work was supported by Agence Nationale de la Recherche (ANR) grants ANR-17-EURE-0017 (Frontiers in Cognition), ANR-10-IDEX-0001-02 PSL (program ‘Investissements d'Avenir’), ANR-16-CE37-0012-01 (ANR JCJ), ANR-19-CE37-0014-01, and ANR-21-CE37-0020-02 (ANR PRC). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References (78)

  • S. Le Bars et al., Agents' pivotality and reward fairness modulate sense of agency in cooperative joint action. Cognition (2020)
  • J.W. Moore et al., Intentional binding and the sense of agency: A review. Consciousness and Cognition (2012)
  • A. Sahaï et al., Action co-representation and the sense of agency during a joint Simon task: Comparing human and machine co-agents. Consciousness and Cognition (2019)
  • N. Sidarus et al., Priming of actions increases sense of control over unexpected outcomes. Consciousness and Cognition (2013)
  • N. Sidarus et al., How action selection influences the sense of agency: An ERP study. NeuroImage (2017)
  • M. Synofzik et al., Beyond the comparator model: A multifactorial two-step account of agency. Consciousness and Cognition (2008)
  • J.D. Van Der Laan et al., A simple procedure for the assessment of acceptance of advanced transport telematics. Transportation Research Part C: Emerging Technologies (1997)
  • R.P. van der Wel et al., The sense of agency during skill learning in individuals and dyads. Consciousness and Cognition (2012)
  • W. Wen et al., The influence of action-outcome delay and arousal on sense of agency and the intentional binding effect. Consciousness and Cognition (2015)
  • D. Wenke et al., Subliminal priming of actions influences sense of control over effects of action. Cognition (2010)
  • B. Bahrami et al., Optimally interacting minds. Science (2010)
  • T. Balsdon et al., Confidence controls perceptual evidence accumulation. Nature Communications (2020)
  • Z. Barlas et al., Effects of free choice and outcome valence on the sense of agency: Evidence from measures of intentional binding and feelings of control. Experimental Brain Research (2018)
  • B. Berberian et al., Automation technology and sense of control: A window on human agency. PLoS One (2012)
  • A. Bigenwald et al., Criminal responsibility and neuroscience: No revolution yet. Frontiers in Psychology: Theoretical and Philosophical Psychology (2019)
  • D.H. Brainard, The psychophysics toolbox. Spatial Vision (1997)
  • E.A. Caspar, Coercition et perte d'agentivité [Coercion and loss of agency]. Médecine/Sciences (2017)
  • E.A. Caspar et al., Only giving orders? An experimental study of the sense of agency when giving or receiving commands. PLoS One (2018)
  • E.A. Caspar et al., How using brain-machine interfaces influences the human sense of agency. PLoS One (2021)
  • V. Chambon et al., What is the human sense of agency, and is it metacognitive?
  • V. Chambon et al., Premotor or ideomotor: How does the experience of action come about? In Action Science: Foundations of an Emerging Discipline (2013)
  • V. Chambon et al., TMS stimulation over the inferior parietal cortex disrupts prospective sense of agency. Brain Structure and Function (2015)
  • V. Chambon et al., From action intentions to action effects: How does the sense of agency come about? Frontiers in Human Neuroscience (2014)
  • V. Chambon et al., Information about action outcomes differentially affects learning from self-determined versus imposed choices. Nature Human Behaviour (2020)
  • V. Chambon et al., An online neural substrate for sense of agency. Cerebral Cortex (2013)
  • K. Christoffersen et al., How to make automated systems team players
  • D. Coyle et al., I did that! Measuring users' experience of agency in their own actions (2012)
  • T.G.E. Damen et al., On the other hand: Nondominant hand use increases sense of agency. Social Psychological and Personality Science (2014)
  • J.A. Dewey et al., Do implicit and explicit measures of the sense of agency measure the same thing? PLoS One (2014)