

Released

Poster

Combining sensory cues for spatial updating: The minimal sensory context to enhance mental rotations

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84281

Vidal,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Vidal, M., Lehmann, A., & Bülthoff, H. (2008). Combining sensory cues for spatial updating: The minimal sensory context to enhance mental rotations. Poster presented at 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C853-9
Abstract
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint, which can arise either from rotation of the test array or from rotation of the observer. Several studies have reported that the cognitive cost of a mental rotation is reduced when the change in viewpoint results from the observer's motion, which can be explained by the spatial updating mechanisms engaged during self-motion. However, little is known about how this process is triggered and how the various sensory cues available contribute to updating performance. We used a virtual reality setup to study mental rotations that, for the first time, allowed us to investigate different combinations of sensory modalities stimulated during the viewpoint change. In an earlier study we validated this platform by replicating the classical advantage found for a moving observer (Lehmann, Vidal, Bülthoff, 2007). In subsequent experiments we showed the following. First, increasing the opportunities for spatial binding (by displaying the rotation of the tabletop on which the test objects lay) was sufficient to significantly reduce the mental rotation cost. Second, a single modality stimulated during the observer's motion (vision or body) is not enough to trigger the advantage. Third, combining two modalities (body + vision or body + audition) significantly improves mental rotation performance. These results are discussed in terms of a sensory-independent triggering of spatial updating during self-motion, with additive effects when sensory modalities are co-activated. In conclusion, we propose a new sensory-based framework that can account for all of the results reported in previous work, including some apparent contradictions about the role of extra-retinal cues.