Fixation-Related Potentials during Scene Perception

Our visual world is complex, but its composition usually follows a set of rather strict rules. Knowledge of these rules, which we call a scene’s “grammar” or, following Biederman et al. (1982), scene semantics and syntax, helps us recognize objects and guides our search through scenes. The finding that there are neural correlates of processing semantic and syntactic inconsistencies (Võ & Wolfe, 2013) opens up new ways of studying the “when” and “where” of scene grammar processing. Can we process higher-level object-scene inconsistencies in our visual periphery? And how do such inconsistencies modulate eye movement behavior before and during their fixation?

To study the relationship between eye movements and scene grammar processing in the brain, an ideal experiment would measure neural responses and eye movements simultaneously, and it would do so while people look around a scene without restriction. Allowing people to move their eyes while recording EEG, however, confronts the researcher with a multitude of challenges, ranging from synchronizing the two data streams to EEG artifacts caused by eyeball rotation and the diversity of the stimuli’s physical features. Recently, efforts to design, record, and analyze experiments with simultaneous recordings of EEG and eye movements have been successful in the field of reading (Dimigen et al., 2012). Our long-term goal is to extend such methods from reading research to scene perception, in order to measure which processes dominate in the brain when viewers fixate particular parts of a scene.
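One core analysis problem in free viewing is that fixations follow each other every few hundred milliseconds, so the neural responses they evoke overlap in time. The continuous-time regression approach of the publication listed below addresses this by modeling the whole continuous EEG at once. As a rough illustration of that general idea (not the paper’s actual pipeline), the following Python/NumPy sketch estimates an overlap-corrected fixation-related potential for one EEG channel via a time-expanded (FIR) design matrix; the function name, the simulated kernel, and all parameter values are illustrative assumptions.

    import numpy as np

    def deconvolve_frp(eeg, fixation_onsets, srate, tmin=-0.2, tmax=0.6):
        """Overlap-corrected FRP for one continuous EEG channel,
        estimated by time-expanded (FIR) least-squares regression."""
        # Lags (in samples) spanning the FRP window around each fixation onset.
        lags = np.arange(int(round(tmin * srate)), int(round(tmax * srate)))
        n_samples = eeg.shape[0]
        # Time-expanded design matrix: one predictor column per lag.
        # Overlapping fixations simply add up in X, which is exactly
        # the structure the regression disentangles.
        X = np.zeros((n_samples, lags.size))
        for onset in fixation_onsets:
            idx = onset + lags
            valid = (idx >= 0) & (idx < n_samples)
            X[idx[valid], np.flatnonzero(valid)] += 1.0
        # Ordinary least squares: beta[j] is the deconvolved FRP at lag j.
        beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
        return lags / srate, beta

    # Demo on simulated data: a known response kernel convolved with
    # densely spaced fixation onsets (every 200-400 ms), plus noise.
    rng = np.random.default_rng(0)
    srate = 500
    onsets = np.cumsum(rng.integers(100, 200, size=200))
    n = onsets[-1] + srate
    kernel_t = np.arange(int(0.6 * srate))
    kernel = np.sin(2 * np.pi * 3 * kernel_t / srate) * np.exp(-kernel_t / (0.15 * srate))
    eeg = rng.normal(0.0, 0.5, size=n)
    for onset in onsets:
        eeg[onset:onset + kernel.size] += kernel

    times, frp = deconvolve_frp(eeg, onsets, srate)
    # frp now approximates the true kernel despite heavily overlapping responses,
    # whereas simple averaging of fixed epochs would smear neighboring fixations together.

A real analysis would add further event types and covariates (e.g., saccade amplitude, stimulus onsets) as extra columns of the same design matrix; this single-kernel version only shows why regressing on the continuous signal can separate responses that overlap in time.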

Related publications

Cornelissen, T., Sassenhagen, J., & Võ, M. L.-H. (2019). Improving free-viewing fixation-related EEG potentials with continuous-time regression. Journal of Neuroscience Methods, 313, 77-94. https://doi.org/10.1016/j.jneumeth.2018.12.010