Event Abstract

Assessment of Top-Down Attention for a Closed-Loop Performance Enhancement System Using High-Frequency Steady-State Visually Evoked Potentials and Eye Tracking

  • 1 Lockheed Martin (United States), Emerging Technology Laboratory, United States
  • 2 Wright State University, Department of Psychology, United States

Tasks involving distributed attention are increasingly common in the workplace, and technology that guides users to direct their attention towards moment-to-moment prioritization of subtasks will facilitate training and performance. Systems for managing multi-vehicle assets are a common example of a distributed attention task in commercial and military settings, where the operator’s attention must change focus depending on the momentary status of a scenario. For example, vehicle flight paths must be optimized to maintain safe distances and achieve mission objectives, but, at times, the operator’s attention must shift to other subtasks such as reporting the status of a vehicle to other teammates. Current state-of-the-art attention reallocation interfaces rely mainly on eye-tracking-based measures and therefore assume that top-down visual attention is always directed towards the point of foveation (overt attention). However, operators may learn to employ covert attention (directing attention toward peripheral vision) to improve performance, or they may suffer a lapse in attention while still foveating on high-priority regions of a user interface (UI). These confounding factors reduce the overall effectiveness of attention reallocation algorithms, which may issue frequent, poorly timed interventions that do not reflect the operator’s state and may even disrupt performance.

To improve on the state of the art, we develop attention detection algorithms that distinguish between foveation and top-down attention in an ecologically valid distributed attention task. We measure attentional processes from electroencephalogram (EEG) recordings by assessing high-frequency steady-state visually evoked potentials (ssVEPs) elicited by flicker embedded within the user interface of FUSION, a multi-asset command-and-control simulation (Rowe et al., 2015). Study participants interact with FUSION on a two-monitor system, where each display includes components relevant to distinct subtasks and each monitor flickers at a predefined frequency above the flicker fusion threshold, so that the oscillating image is perceived as steady rather than flickering. Participants use the system to direct aerial assets in a search task in a map window and report targets they discover in a chat window. Additionally, participants are required to answer questions about the assets under their control via the chat window. By dividing the map and chat windows across the two monitors, we can tag each of these subtasks with a different flicker frequency.

Because the stimuli flicker at higher frequencies than those used in most ssVEP studies, we devise novel detection algorithms that remain computationally tractable for a real-time attention reallocation system. We compare three different approaches to extracting the signal of interest. Because these analyses generate a large number of features, we refine the feature set needed to predict attentional states using dimensionality reduction across frequencies and EEG channels. The reduced feature set is then used to train a machine learning classifier that predicts attentional state. This ssVEP-based assessment of attention is used to corroborate findings from a novel eye-tracking-based model of foveation: gaze fixation on a particular subtask in the UI suggests the participant is attending to it, at least via foveation, and the ssVEP-based assessment of top-down attention is used to confirm this. In cases where top-down attention diverges from foveation, our system disambiguates inattention from covert attention.
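For illustration, the minimal Python sketch below shows one plausible form of such a frequency-tagged pipeline: spectral power at each monitor's tagged frequency (and a harmonic) is estimated per EEG channel with Welch's method, reduced with PCA across channels and frequencies, and passed to a linear classifier. The specific flicker frequencies, sampling rate, window length, and the choice of PCA and a linear SVM are illustrative assumptions, not the detection approaches compared in the study.

    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    FS = 256                   # EEG sampling rate in Hz (assumed)
    TAG_FREQS = [32.0, 36.0]   # hypothetical per-monitor flicker frequencies (Hz)
    N_HARMONICS = 2            # fundamental plus first harmonic

    def ssvep_features(epoch, fs=FS, tag_freqs=TAG_FREQS, n_harmonics=N_HARMONICS):
        """Spectral power at each tagged frequency (and harmonics) per EEG channel.

        epoch: array of shape (n_channels, n_samples) for one analysis window.
        Returns a 1-D feature vector of length n_channels * len(tag_freqs) * n_harmonics.
        """
        freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
        feats = []
        for f0 in tag_freqs:
            for h in range(1, n_harmonics + 1):
                idx = np.argmin(np.abs(freqs - f0 * h))  # nearest PSD bin to the tag
                feats.append(psd[:, idx])                # one power value per channel
        return np.concatenate(feats)

    def fit_attention_classifier(X_epochs, y):
        """X_epochs: (n_epochs, n_channels, n_samples); y: labels for the attended window."""
        X = np.vstack([ssvep_features(ep) for ep in X_epochs])
        clf = make_pipeline(StandardScaler(),
                            PCA(n_components=10),   # dimensionality reduction (assumed size)
                            SVC(kernel="linear"))   # stand-in for the trained classifier
        return clf.fit(X, y)

In a real-time setting, the fitted classifier would be applied to features from short sliding windows of EEG and compared against concurrent gaze fixations to flag divergence between foveation and top-down attention.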

Acknowledgements

Human Performance Sensing: Call 002 BAA

Keywords: attention, steady-state visually evoked potentials (ssVEP), eye tracking, EEG, psychophysiology, multi-vehicle tracking

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Pava MJ, Alexander WC, Collins GJ, Galego BJ, Russo JC, Harel A, Fox OM, Hansen NE and Russell BA (2019). Assessment of Top-Down Attention for a Closed-Loop Performance Enhancement System Using High-Frequency Steady-State Visually Evoked Potentials and Eye Tracking. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00050

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 02 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence:
Dr. Matthew J Pava, Lockheed Martin (United States), Emerging Technology Laboratory, Bethesda, Virginia, United States, mjpava@gmail.com
Dr. Bartlett A Russell, Lockheed Martin (United States), Emerging Technology Laboratory, Bethesda, Virginia, United States, bartlett.a.russell@lmco.com