Eye-Tracking

Your gaze reveals a lot about what is going on in your head: how you react to what you see and how you choose to accomplish a task.

In particular, at the SGL we are interested in how you move your eyes while searching for objects (a.k.a. visual search) in complex everyday scenes (like a bedroom or a kitchen).

To study gaze movements, we rely on eye-tracking devices. These devices have cameras pointed at your eye(s) and calculate the position of your gaze in 2D (e.g., on a screen) or in 3D (e.g., in virtual reality or in real-life situations) several hundred times per second.
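
As a rough illustration (the exact format of the data depends on the eye tracker and its software), you can think of the recording as a stream of timestamped gaze samples, sketched here in Python:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class GazeSample:
    """One gaze measurement; a 500 Hz tracker delivers about 500 of these per second."""
    timestamp_ms: float                                         # when the sample was recorded
    screen_xy: Optional[Tuple[float, float]] = None             # 2D gaze point on a screen (pixels)
    gaze_dir_xyz: Optional[Tuple[float, float, float]] = None   # 3D gaze direction in VR (unit vector)
    valid: bool = True                                          # False when the eye was lost (e.g., during a blink)
```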

Once we know the direction of your gaze, we can determine what you were looking at and how you looked at the scene.

The "what" part allows us to count how many times your attention was directed towards a particular object, which is especially interesting when that object is a distractor or a search target.
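
As a sketch of how this "what" analysis can work (the object names and pixel coordinates below are made up for illustration), each fixation is tested against the regions of interest covering the objects in the scene:

```python
# Hypothetical regions of interest (screen pixels): name -> (x_min, y_min, x_max, y_max)
AOIS = {
    "target_mug": (400, 300, 520, 420),
    "distractor_lamp": (900, 150, 1020, 330),
}

def aoi_hits(fixations, aois=AOIS):
    """Count how many fixations land inside each area of interest."""
    counts = {name: 0 for name in aois}
    for fx, fy in fixations:                      # (x, y) centroid of each fixation
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= fx <= x1 and y0 <= fy <= y1:
                counts[name] += 1
    return counts

print(aoi_hits([(450, 350), (950, 200), (460, 360)]))
# {'target_mug': 2, 'distractor_lamp': 1}
```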

The "how" is also very interesting, because from the raw gaze direction data we can identify fixations and saccades. Fixations are relatively long periods of time (approx. 300 ms) during which your gaze remains stable on one part of the scene, which lets you inspect a region of interest. A saccade, on the other hand, is a very fast, "ballistic" movement of the eye that shifts which part of the scene you perceive with high acuity (with your fovea).
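
To give a rough idea of how these events are separated, here is a minimal sketch of a velocity-threshold classifier (the 30 deg/s threshold is illustrative; real algorithms take more care with noise, blinks, and smooth pursuit):

```python
import math

def classify_samples(samples, velocity_threshold_deg_s=30.0):
    """Label each gaze sample as part of a fixation or a saccade.

    `samples` is a list of (timestamp_s, x_deg, y_deg) gaze positions in
    degrees of visual angle. Samples moving faster than the threshold are
    treated as saccades; the rest as fixations (a simplified velocity-threshold scheme).
    """
    labels = ["fixation"]  # the first sample has no velocity estimate
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        velocity = math.hypot(x1 - x0, y1 - y0) / dt  # deg/s between consecutive samples
        labels.append("saccade" if velocity > velocity_threshold_deg_s else "fixation")
    return labels
```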

At the SGL, we use eye tracking on screen and in virtual reality to investigate how visual attention is directed through a scene in search of objects, in particular when those objects are placed in locations that are consistent or inconsistent with scene grammar.

In one such experiment, we implemented a "gaze-contingent" protocol: an experimental protocol that uses the gaze data sent by the eye tracker to update the visual content displayed on the screen in real time. You can see in the videos below that we drew a mask at the location of the gaze to remove either central or peripheral vision.
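
Stripped to its core, such a gaze-contingent loop reads the newest gaze sample and redraws the mask at that position on every frame. The sketch below uses placeholder `tracker` and `display` objects; the real experiment relies on the eye tracker's own software and a rendering engine.

```python
def run_gaze_contingent_trial(tracker, display, mask_radius_px=150, n_frames=600):
    """Very simplified gaze-contingent loop: the mask follows the gaze on every frame.

    `tracker` and `display` are placeholders for the real eye-tracker interface
    and rendering code used in an actual experiment.
    """
    for _ in range(n_frames):                         # e.g., 600 frames ~ 10 s at 60 Hz
        gx, gy = tracker.read_latest_gaze()           # newest gaze sample, in screen pixels
        display.draw_scene()                          # redraw the search scene
        display.draw_mask_at(gx, gy, mask_radius_px)  # occlude vision at (or around) the gaze point
        display.flip()                                # show the updated frame
```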

Related publications

David, E. J., Beitner, J., & Võ, M. L.-H. (2021). The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. Journal of Vision, 21(7), 3. doi: doi.org/10.1167/jov.21.7.3

Beitner, J., Helbing, J., Draschkow, D., & Võ, M. L.-H. (2021). Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality. Brain Sciences, 11(1), 44. doi: doi.org/10.3390/brainsci11010044

David, E., Beitner, J., & Võ, M. L.-H. (2020). Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sciences, 10(11), 841. doi: doi.org/10.3390/brainsci10110841

Öhlschläger, S., & Võ, M. L.-H. (2020). Development of scene knowledge: Evidence from explicit and implicit scene knowledge measures. Journal of Experimental Child Psychology, 194, 104782. doi: doi.org/10.1016/j.jecp.2019.104782

Helbing, J., Draschkow, D., & Võ, M. L.-H. (2020). Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition, 196, 104147. doi: doi.org/10.1016/j.cognition.2019.104147

Boettcher, S. E. P., Draschkow, D., Dienhart, E., & Võ, M. L.-H. (2018). Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. Journal of Vision, 18(13), 11. doi: doi.org/10.1167/18.13.11

Draschkow, D., & Võ, M. L.-H. (2017). Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Scientific Reports, 7(1), 16471. doi: doi.org/10.1038/s41598-017-16739-x

Kok, E. M., Aizenman, A. M., Võ, M. L.-H., & Wolfe, J. M. (2017). Even if I showed you where you looked, remembering where you just looked is hard. Journal of Vision, 17(12), 2. doi: doi.org/10.1167/17.12.2

Draschkow, D., & Võ, M. L.-H. (2016). Of “what” and “where” in a natural search task: Active object handling supports object location memory beyond the object’s identity. Attention, Perception & Psychophysics, 78, 1574-1584. doi: doi.org/10.3758/s13414-016-1111-x