
Record


Released

Poster

How do we know where we are? Contribution and interaction of visual and vestibular cues for spatial updating in real and virtual environments

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84170

Riecke,  BE
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84287

von der Heyde,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Riecke, B., von der Heyde, M., & Bülthoff, H. (2001). How do we know where we are? Contribution and interaction of visual and vestibular cues for spatial updating in real and virtual environments. Poster presented at 4. Tübinger Wahrnehmungskonferenz (TWK 2001), Tübingen, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-E2E0-0
Abstract
In order to know where we are when moving through space, we constantly update our mental egocentric representation of the environment, matching it to our motion. This process, termed "spatial updating", is mostly automatic, effortless, and obligatory (i.e., hard to suppress). Our goal here is twofold: 1) to quantify spatial updating; 2) to investigate the importance and interaction of visual and vestibular cues for spatial updating.

The stimuli consisted of twelve targets (the numbers 1 to 12, arranged like a clock face) attached to the walls of a 5×5 m room. Subjects saw either the real room or a photo-realistic 3D model of it presented via a head-mounted display (HMD). For vestibular stimulation, subjects were seated on a Stewart motion platform. After each rotation, the subjects' task was to point "as quickly and accurately as possible" to four targets announced consecutively via headphones. Spatial updating performance was quantified in terms of response time and pointing error (absolute error and variance) in three different spatial updating conditions: subjects were (a) rotated to a different orientation (UPDATE condition); (b) rotated as in (a), but asked to ignore that rotation and "point as if not having turned" (IGNORE); or (c) rotated to a new orientation and immediately back to the original orientation before being asked to point (CONTROL). Each of the twelve subjects was presented with six stimulus conditions (blocks A-F, 15 min each) in balanced order, with different amounts of visual and vestibular information available.

Performance, especially response times, varied considerably between subjects but showed the same overall pattern: 1) Performance was best in the real-world condition (block A). When the field of view was limited via cardboard blinders (block B) to match that of the HMD (40×30°), performance decreased and was comparable to the HMD condition (block C). Presenting only visual information for the turns (through the HMD, block D) decreased performance slightly further. 2) In the four blocks where visual information about the rotation was available, subjects performed equally well in the UPDATE and CONTROL conditions. Performance in the IGNORE condition, however, was significantly impaired, indicating that spatial updating was indeed obligatory in the sense of being hard to suppress. 3) When subjects were blindfolded (block E) or saw a constant image of the scene (block F), IGNORE performance increased and was comparable to UPDATE performance, suggesting that spatial updating was no longer obligatory when visual cues about the motion were removed.

Speeded pointing tasks proved to be a viable method for quantifying spatial updating. We conclude that, at least for the regular target arrangement and limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., as hard to ignore) as its real-world counterpart, even when only visual information was available.