
Released

Journal Article

The nature of the visual environment induces implicit biases during language-mediated visual search

MPS-Authors

Huettig, Falk
Individual Differences in Language Processing Department, MPI for Psycholinguistics, Max Planck Society;
Mechanisms and Representations in Comprehending Speech, MPI for Psycholinguistics, Max Planck Society;
Coordination of Cognitive Systems, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;


McQueen, James M.
Mechanisms and Representations in Comprehending Speech, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Language Comprehension Department, MPI for Psycholinguistics, Max Planck Society;
Behavioural Science Institute, Radboud University Nijmegen;

Fulltext (public)

Huettig_2011_nature.pdf
(Publisher version), 543KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-435F-5
Abstract
Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. As targets were heard, there were significant shifts in eye gaze towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.