
Record


Released

Report

Visual Homing is possible without Landmarks: A Path Integration Study in Virtual Reality

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84170

Riecke,  BE
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84273

van Veen,  HAHC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (freely accessible)
No freely accessible full texts are available
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Riecke, B., van Veen, H., & Bülthoff, H. (2000). Visual Homing is possible without Landmarks: A Path Integration Study in Virtual Reality (82).


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-E467-E
Abstract
The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual reality where only visual cues were provided. Subjects had to execute turns, reproduce distances, or perform triangle completion tasks: After following two prescribed segments of a triangle, subjects had to return directly to the unmarked starting point. Subjects were seated in the center of a half-cylindrical 180 degree projection screen and controlled the visually simulated ego-motion with mouse buttons. Most experiments were performed in a simulated 3D field of blobs providing a convincing feeling of self-motion (vection) but no landmarks, thus restricting navigation strategies to path integration based on optic flow. Other experimental conditions included salient landmarks or landmarks that were only temporarily available. Optic flow information alone proved to be sufficient for untrained subjects to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards mean responses. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance, especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. Compared to similar experiments using virtual environments (Péruch et al., 1997; Bud, 2000) or blind locomotion (Loomis et al., 1993), we did not find the typically observed distance undershoot and strong regression towards mean turn responses. Using a virtual reality setup with a half-cylindrical 180 degree projection screen allowed us to demonstrate that visual path integration without any vestibular or kinesthetic cues is sufficient for elementary navigation tasks like rotations, translations, and homing via triangle completion.
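
The triangle completion task described in the abstract has a simple geometric reading: under pure path integration, the correct homing response is the negative of the vector sum of the two travelled legs. The following minimal Python sketch is not part of the original record; the function name and sign conventions are illustrative assumptions, shown only to make the ideal (error-free) response concrete.

import math

def homing_response(leg1, turn_deg, leg2):
    """Ideal triangle-completion response from pure path integration.

    The subject travels leg1, turns by turn_deg (degrees), travels leg2,
    and must then return to the unmarked start. Returns the required
    homing distance and the turn (degrees) relative to the current heading.
    Names and sign conventions are illustrative, not taken from the paper.
    """
    # Start at the origin heading along +x; integrate the two legs.
    heading = 0.0
    x = leg1 * math.cos(heading)
    y = leg1 * math.sin(heading)
    heading += math.radians(turn_deg)
    x += leg2 * math.cos(heading)
    y += leg2 * math.sin(heading)

    # The homing vector is the negative of the integrated position.
    home_dist = math.hypot(x, y)
    home_bearing = math.atan2(-y, -x)          # direction back to the start
    home_turn = math.degrees(home_bearing - heading)
    # Normalize the required turn to (-180, 180] degrees.
    home_turn = (home_turn + 180.0) % 360.0 - 180.0
    return home_dist, home_turn

# Example: two 5 m legs joined by a 90 degree turn require a return of
# about 7.07 m after turning about 135 degrees.
print(homing_response(5.0, 90.0, 5.0))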