Event Abstract

Using Deep Convolutional Neural Networks to Develop the Next Generation of Sensors for Interpreting Real World EEG Signals Part 2: Developing Sensors for Vigilance Detection

  • 1 DCS Corporation (United States), United States
  • 2 United States Army Research Laboratory, United States

Current electroencephalography (EEG)-based brain-computer interface (BCI) development efforts have focused almost exclusively on improving the accuracy of the underlying user- and paradigm-specific models. One reason for this is that BCIs have primarily been used in clinical populations to restore compromised functionality, where the accuracy of such systems is of utmost importance. In contrast, there has been relatively little research into the generalization capabilities of BCIs, and almost no work investigating how BCIs could be used as general-purpose sensors that reliably operate independently of context. However, as interest in BCIs used to monitor healthy individuals performing everyday tasks (passive BCIs) continues to expand, it is critical that the field focus on both accuracy and generalization, as the context surrounding everyday tasks will change dynamically and without warning. Here we describe the second of two efforts to use convolutional neural networks (CNNs) to develop BCIs that work across individuals and application domains. We use the EEGNet algorithm, a CNN previously shown to generalize well to a variety of BCI paradigms in both event-related and oscillatory contexts. The layers within EEGNet essentially produce a convolutional version of the standard EEG spatial filter, but one that operates in both space and time, as well as across multiple underlying neural components. Previous work has shown that this approach works well in cross-subject and cross-domain transfer learning for visual target detection, and here we investigate its application to continuous state monitoring. Specifically, we are interested in detecting states most commonly associated with drowsiness, fatigue, or boredom. To assess cross-domain generalization, we trained the EEGNet model on data from one experiment and tested the learned model on data from a separate, similar but distinct experiment.
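The idea of a spatial filter that also extends over time can be sketched in a few lines of NumPy. This is an illustrative toy, not the actual EEGNet architecture: the channel count, kernel, and weights below are placeholder assumptions.

```python
import numpy as np

def temporal_filter(eeg, kernel):
    # eeg: (channels, samples); kernel: (k,) FIR filter applied along time,
    # analogous to a learned temporal convolution in a CNN
    return np.stack([np.convolve(ch, kernel, mode="same") for ch in eeg])

def spatial_filter(filtered, weights):
    # weights: (channels,) — one learned pattern collapsing the channel
    # dimension, the convolutional analogue of a standard EEG spatial filter
    return weights @ filtered  # -> (samples,)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 512))        # 8 channels, 2 s at 256 Hz (illustrative)
band = np.hamming(33)
band /= band.sum()                          # illustrative temporal kernel
w = rng.standard_normal(8)                  # illustrative spatial weights
feature = spatial_filter(temporal_filter(eeg, band), w)  # shape (512,)
```

Applying the temporal kernel before the spatial weights mirrors the ordering in EEGNet, where temporal convolutions precede the depthwise spatial convolution, so the learned spatial pattern acts on band-limited rather than broadband signals.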
Our training data were collected from a simulated driving experiment in which subjects had to maintain lane position while responding to laterally directed perturbations. The length of the experiment, and the nature of the task, were designed to induce boredom and/or drowsiness. We analyzed reaction times to the lateral perturbations and defined good and bad states (which we refer to here as “non-drowsy” and “drowsy” for convenience) as the time segments with the fastest and slowest reaction times, respectively. We developed a generalized model, training on more than 12,000 two-second epochs from 14 subjects, and verified classification capability by assessing across-subject performance using leave-one-out cross-validation. Our test dataset was also drawn from a simulated driving task, in which subjects (n = 9) were responsible for 1) maintaining lane position in the presence of lateral perturbations, 2) maintaining a safe distance from a lead vehicle, and 3) performing a visual target discrimination task in which pedestrians appeared along the side of the road. We used our generalized model to continuously classify the data from each subject. The results showed that blocks of data labeled as “drowsy” by the CNN were twice as likely to contain a driving error. Along with our companion article describing the application of EEGNet to modeling visual system function, these results demonstrate 1) the utility of CNNs for BCI generalization, and 2) the potential for BCI systems to operate as general-purpose, context-independent sensors.
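The labeling and validation scheme described above can be sketched as follows. The 25% cutoff and the helper names are illustrative assumptions; the abstract does not specify the exact reaction-time thresholds used.

```python
import numpy as np

def label_epochs_by_rt(rts, frac=0.25):
    """Label the fastest `frac` of epochs 0 ("non-drowsy") and the slowest
    `frac` 1 ("drowsy"); epochs in the middle of the reaction-time
    distribution are discarded. The 25% fraction is an assumption."""
    rts = np.asarray(rts, dtype=float)
    lo, hi = np.quantile(rts, [frac, 1.0 - frac])
    keep = (rts <= lo) | (rts >= hi)
    labels = (rts >= hi).astype(int)
    return np.flatnonzero(keep), labels[keep]

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject at a time,
    to assess across-subject generalization."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        yield (np.flatnonzero(subject_ids != s),
               np.flatnonzero(subject_ids == s))

# Toy reaction times (seconds) for eight epochs
rts = [0.4, 0.5, 0.9, 1.4, 0.45, 1.3, 0.8, 1.5]
idx, y = label_epochs_by_rt(rts)  # keeps the fastest and slowest quarters
```

In a full pipeline, the kept epoch indices would select the corresponding EEG epochs, and each leave-one-subject-out split would train a fresh model on the training subjects before classifying the held-out subject's data.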

Acknowledgements

This research was sponsored by the Army Research Laboratory under ARL-74A-HRCYB and through Cooperative Agreement Number W911NF-10-2-0022. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Keywords: BCI, deep learning, EEG, state monitoring, vigilance

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Poster Presentation

Topic: Neuroergonomics

Citation: McDaniel J, Solon A, Lawhern V, Metcalfe J, Marathe A and Gordon S (2019). Using Deep Convolutional Neural Networks to Develop the Next Generation of Sensors for Interpreting Real World EEG Signals Part 2: Developing Sensors for Vigilance Detection. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00037

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 02 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Mr. Jonathan McDaniel, DCS Corporation (United States), Alexandria, Virginia, 22310, United States, jmcdaniel@dcscorp.com