
Released

Poster

Combining sensory cues for spatial updating: The minimal sensory context to enhance mental rotations

MPG Authors

Vidal, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Vidal, M., Lehmann, A., & Bülthoff, H. (2008). Combining sensory cues for spatial updating: The minimal sensory context to enhance mental rotations. Poster presented at 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C853-9
Abstract
Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint, which arises either from rotation of the test array or rotation of the observer. Several studies have reported that the cognitive cost of a mental rotation is reduced when the change in viewpoint results from the observer's motion, which can be explained by the spatial updating mechanisms engaged during self-motion. However, little is known about how this process is triggered and how the various available sensory cues might contribute to updating performance. We used a virtual reality setup to study mental rotations that, for the first time, allowed us to investigate different combinations of the modalities stimulated during viewpoint changes. In an earlier study we validated this platform by replicating the classical advantage found for a moving observer (Lehmann, Vidal, & Bülthoff, 2007). In subsequent experiments we showed, first, that increasing the opportunities for spatial binding (by displaying the rotation of the tabletop on which the test objects lay) was sufficient to significantly reduce the mental rotation cost; second, that a single modality stimulated during the observer's motion (vision or body) is not enough to trigger the advantage; and third, that combining two modalities (body and vision, or body and audition) significantly improves mental rotation performance. These results are discussed in terms of a sensory-independent triggering of spatial updating during self-motion, with additive effects when sensory modalities are co-activated. In conclusion, we propose a new sensory-based framework that can account for all of the results reported in previous work, including some apparent contradictions concerning the role of extra-retinal cues.