EEG-based Decoding of Selective Visual Attention in Superimposed Videos

Abstract

Selective attention enables humans to process visual stimuli efficiently by enhancing important locations or objects and filtering out irrelevant information. Decoding the locus of visual attention is a fundamental problem in neuroscience, with potential applications in brain-computer interfaces. Conventional paradigms often use synthetic stimuli or static images, whereas real-life visual stimuli contain smooth, highly irregular dynamics. In this study, we show that these irregular dynamics in natural videos can be decoded from electroencephalography (EEG) signals to perform selective visual attention decoding. To this end, we propose an experimental paradigm in which participants attend to one of two superimposed videos, each showing a center-aligned person performing a stage act. We then train a stimulus-informed decoder to extract EEG signal components that are correlated with the motion patterns of the attended object, and show that this decoder can be applied to unseen data to detect which of the two objects is attended. Eye movements are also found to be correlated with the motion patterns in the attended video, despite the spatial overlap between the target and the distractor. We further show that these eye movements do not dominantly drive the EEG-based decoding and that EEG and gaze data carry complementary information. Moreover, our results indicate that EEG also captures information about unattended objects. To our knowledge, this study is the first to explore EEG-based selective visual attention decoding on natural videos, opening new possibilities for experiment design in related fields.
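To give a concrete picture of the general idea, the sketch below shows one way a stimulus-informed attention decoder of this kind could be set up, using canonical correlation analysis (CCA) as a stand-in for the paper's actual method. Everything here is an illustrative assumption rather than the published pipeline: the synthetic data shapes, the sampling rate, the single motion feature per video, and the use of scikit-learn's CCA. The principle it demonstrates is the one described in the abstract: the video whose motion feature correlates most with the decoded EEG component is declared the attended one.

```python
# Minimal sketch of CCA-style stimulus-informed attention decoding.
# Assumed, not from the paper: data shapes, sampling rate, one
# motion feature per video, scikit-learn's CCA as the decoder.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
fs = 64                       # hypothetical EEG sampling rate (Hz)
T, C = 10 * fs, 64            # 10 s of 64-channel EEG
motion_att = rng.standard_normal(T)    # motion feature, attended video
motion_unatt = rng.standard_normal(T)  # motion feature, distractor video
# Toy EEG: a weak copy of the attended motion buried in noise.
eeg = 0.5 * motion_att[:, None] + rng.standard_normal((T, C))

def lag_matrix(x, lags):
    """Stack time-lagged copies of a 1-D feature into a (T, lags) matrix."""
    X = np.zeros((len(x), lags))
    for k in range(lags):
        X[k:, k] = x[:len(x) - k]
    return X

# Train: find EEG and stimulus projections with maximal correlation.
cca = CCA(n_components=1)
cca.fit(eeg, lag_matrix(motion_att, lags=8))

def corr_with(decoder, eeg_seg, feature):
    """Correlation between the decoded EEG component and a candidate feature."""
    x_c, y_c = decoder.transform(eeg_seg, lag_matrix(feature, lags=8))
    return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

# Test: pick the video whose motion correlates more with the EEG component.
scores = {"video_A": corr_with(cca, eeg, motion_att),
          "video_B": corr_with(cca, eeg, motion_unatt)}
print("decoded attended video:", max(scores, key=scores.get))
```

In practice the decoder would be trained and evaluated on separate data segments; the toy example above reuses the training segment only to keep the sketch short.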

Publication
arXiv
Yuanyuan Yao