Event Abstract

Evaluating and Modeling Human-Machine Teaming and Trust in Automation while on the Road

  • 1 United States Air Force Academy, Warfighter Effectiveness Research Center, United States
  • 2 West Virginia University, Human Performance & Applied Neuroscience, United States

The United States Air Force has indicated a need to move toward the development and incorporation of increasingly automated systems in operational settings (Department of Defense, 2015). This ranges from unmanned aircraft flying missions alongside piloted 5th-generation fighters (Warwick, 2017) to supervisory control over various other automated assets (Chung, 2016). However, much of the work on human-machine teaming has been done in sterile laboratory environments, devoid of any real consequences for a miscalibration of trust, a lack of situation awareness (SA), or excessive workload. Although these laboratory studies provide experimental control, they do not fully capture the participant’s range of physiological responses, since there is no personal risk in participating in a simulation (i.e., no injury from a crash, no threat from a missed alert). Previous work has documented the consequences of automation (Onnasch, Wickens, Li, & Manzey, 2014; Sebok & Wickens, 2017; Wickens, 1992). However, recent commercial technological advancements allow human-automation interaction (HAI) to be studied in a context that provides greater ecological validity and a glimpse into the workings of the ‘brain in the wild’. For that reason, we have established a mobile research laboratory we call the HART (Human-Automation Research in a Tesla) mobile lab. This mobile lab is set up in a 2017 Tesla Model X equipped with automated features that include lane-following, adaptive cruise control (ACC), and automated parking (Tesla, 2017). While the effects of these features on SA have been examined in a naturalistic study (Endsley, 2017), we stand poised to study trust, SA, and workload as has never been done before. Within the HART mobile lab we have included five distinct pieces of technology for data recording.
The one thing unifying all of this technology is its mobility, which allows us to collect a multitude of data in the most ecologically valid way. As seen in Figure 1, the participant is outfitted with physiological sensors while monitoring the operation of the vehicle. First is the Tobii Pro Glasses system, which allows for mobile, naturalistic eye tracking. This system is worn like any other pair of glasses and collects data on eye fixations, scan patterns, eye blinks, and pupil dilation. Next is the BioRadio (Great Lakes NeuroTechnologies), a radio-transmission device that allows for mobile physiological measurement via electrocardiogram (ECG) and galvanic skin response (GSR). Via both the ECG and GSR, we can ascertain real-time arousal information that can be temporally linked with events such as a transfer of control (TOC) to the automation, as well as violations of trust and trust repair. The third piece of technology is the Advanced Brain Monitoring B-Alert X24 mobile electroencephalography (EEG) system, which allows us to measure workload and attention (Berka et al., 2007). Additionally, cameras mounted inside the car capture not only the interior and exterior environment but also the participant’s face, so that their emotional and cognitive states can be analyzed using Ekman and Friesen’s (1978) action-unit measurements of facial muscle movement. The final piece of technology is a RaceCapture telemetry system, which allows for real-time recording of vehicle data (i.e., acceleration, braking, and steering). As mentioned above, all of these technologies provide unprecedented mobility to study the effects of trust, workload, and SA. Several projects are already under way to look directly at these topics. One study is examining the propensity to trust the automation over time: by having a participant behind the wheel for a series of 10 parking tasks, we are modeling how their trust in the system evolves with exposure to it.
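To illustrate how arousal signals such as GSR might be temporally linked to discrete events like a transfer of control, a simple event-locked windowing analysis can be sketched as below. The sampling rate, window lengths, and step-shaped synthetic GSR trace are illustrative assumptions for the sketch, not data or code from the HART lab.

```python
import numpy as np

def event_locked_windows(timestamps, signal, event_times, pre=5.0, post=5.0):
    """Extract mean signal levels in windows before and after each event.

    Returns a list of (pre_mean, post_mean) pairs -- a crude index of
    arousal change around events such as a transfer of control (TOC).
    """
    windows = []
    for t in event_times:
        pre_mask = (timestamps >= t - pre) & (timestamps < t)
        post_mask = (timestamps >= t) & (timestamps < t + post)
        if pre_mask.any() and post_mask.any():
            windows.append((signal[pre_mask].mean(), signal[post_mask].mean()))
    return windows

# Synthetic GSR trace at 10 Hz with a step increase in arousal
# after a hypothetical TOC event at t = 30 s.
ts = np.arange(0, 60, 0.1)
rng = np.random.default_rng(0)
gsr = np.where(ts < 30, 1.0, 1.5) + 0.01 * rng.standard_normal(ts.size)

pairs = event_locked_windows(ts, gsr, event_times=[30.0])
pre_mean, post_mean = pairs[0]  # post_mean should exceed pre_mean here
```

The same windowing could in principle be applied to ECG-derived heart-rate series or pupil-dilation traces, with event times taken from the vehicle telemetry log.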
This setup also allows us to examine the neural and physiological correlates that predict one’s propensity to take over control. We are also examining trust in the self-driving capabilities as a function of whether the automation has failed. By exploiting certain current limitations of the system, the automation will perform either an error of omission, an error of commission, or no error at all. From there, the vehicle will approach a stop sign (which it currently cannot detect), and we will examine at what point the participant intervenes. This allows us to examine not only reactions to violations of trust, SA, and the workload associated with supervising the automation, but also the degree to which different types of errors and violations affect those variables. Finally, while generational gaps in trust of automated cars have been reported, that work has focused on self-reports of trust (“Consumer”, 2016). We are working on several studies that will utilize a cross-section of the population to ascertain trust as it relates to behavior and interaction with automation. Beyond the ways in which these technologies can be used in isolation from one another, the data they produce also allow for modeling of trust and the development of artificial neural networks. By developing these networks, we can build systems and artificial intelligence that monitor the operator in a way that allows for greater transparency, adaptability, and communication between human and machine. In conclusion, the HART mobile lab at the United States Air Force Academy represents a new frontier in research on human-machine teaming by evaluating the ways in which humans trust in, and rely on, automation. This research allows for evaluation with high ecological validity, in an environment with real-world consequences for failures of automation.
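One simple way to model how trust evolves with repeated exposure, as in the 10-trial parking study mentioned above, is to fit an exponential learning curve to trust scores across trials. This is an illustrative sketch only: the ratings below are made-up placeholders, and the exponential form is an assumption, not the authors’ model.

```python
import numpy as np
from scipy.optimize import curve_fit

def trust_curve(n, t0, t_inf, tau):
    """Exponential trust-growth model: trust starts at t0 and
    approaches an asymptote t_inf with time constant tau (in trials)."""
    return t_inf - (t_inf - t0) * np.exp(-n / tau)

# Hypothetical trust ratings (1-7 scale) across 10 automated parking trials.
trials = np.arange(1, 11)
ratings = np.array([2.1, 3.0, 3.8, 4.3, 4.9, 5.2, 5.4, 5.6, 5.7, 5.8])

params, _ = curve_fit(trust_curve, trials, ratings, p0=[2.0, 6.0, 3.0])
t0, t_inf, tau = params  # e.g., asymptotic trust t_inf and learning rate 1/tau
```

The fitted asymptote and time constant give compact, comparable parameters for how quickly trust calibrates across participants or conditions; the same curve could be refit after an automation failure to quantify trust violation and repair.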
If we, as a field, are to generalize our research findings not only to the real world but specifically to the military, it is imperative to evaluate our methodologies through the lens of ecological validity. There must be consideration of how trust evolves over time, how it is repaired when violated, and how to teach an AI system to understand the human when there are real consequences to miscalibrated trust.

Figure 1

Keywords: human-machine interaction, trust in automation, physiological monitoring, naturalistic data collection, human performance modeling

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Poster Presentation

Topic: Neuroergonomics

Citation: Tenhundfeld N, De Visser E, Tossell C and Finomore V (2019). Evaluating and Modeling Human-Machine Teaming and Trust in Automation while on the Road. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00092

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 10 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Dr. Victor Finomore, West Virginia University, Human Performance & Applied Neuroscience, Morgantown, United States, victor.finomore@hsc.wvu.edu