
Released

Journal Article

Language-Driven Anticipatory Eye Movements in Virtual Reality

MPS-Authors

Eichert,  Nicole
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;
University of Oxford;


Peeters,  David
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;


Hagoort,  Peter
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

External Resource
No external resources are shared
Fulltext (public)

Eichert_Peeters_Hagoort_2018.pdf
(Publisher version), 844KB

Supplementary Material (public)

13428_2017_929_MOESM1_ESM.docx
(Supplementary material), 353KB

Citation

Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-Driven Anticipatory Eye Movements in Virtual Reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-6F87-0
Abstract
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real-world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye tracking in rich and multimodal 3D virtual environments.