Conference Paper

Spatial updating in real and virtual environments: contribution and interaction of visual and vestibular cues

MPS-Authors

Riecke,  BE
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Riecke, B., & Bülthoff, H. (2004). Spatial updating in real and virtual environments: contribution and interaction of visual and vestibular cues. In V. Interrante, A. McNamara, H. Bülthoff, & H. Rushmeier (Eds.), APGV '04: 1st Symposium on Applied Perception in Graphics and Visualization (pp. 9-17). New York, NY, USA: ACM Press.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D827-E
Abstract
INTRODUCTION: When we move through the environment, the self-to-surround relations constantly change. Nevertheless, we perceive the world as stable. A process that is critical to this perceived stability is "spatial updating", which automatically updates our egocentric mental spatial representation of the surround according to our current self-motion. According to the prevailing opinion, vestibular and proprioceptive cues are absolutely required for spatial updating. Here, we challenge this notion by varying visual and vestibular contributions independently in a high-fidelity VR setup.

METHODS: In a learning phase, participants learned the positions of twelve targets attached to the walls of a 5 × 5 m room. In the testing phase, participants saw either the real room or a photo-realistic copy presented via a head-mounted display (HMD). Vestibular cues were applied using a motion platform. Participants' task was to point "as accurately and quickly as possible" to four targets announced consecutively via headphones after rotations around the vertical axis into different positions.

RESULTS: Automatic spatial updating was observed whenever useful visual information was available: participants had no problem mentally updating their orientation in space, irrespective of turning angle. Performance, quantified as response time, configuration error, and pointing error, was best in the real-world condition. However, when the field of view was limited via cardboard blinders to match that of the HMD (40 × 30°), performance decreased and was comparable to the HMD condition. Presenting turning information only visually (through the HMD) hardly altered those results. In both the real-world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to ignore ego-turns (i.e., "point as if not having turned") than to update them as usual.

CONCLUSION: The rapid pointing paradigm proved to be a useful tool for quantifying spatial updating. We conclude that, at least for the limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real-world counterpart, even when only visual information was presented. This has relevant implications for the design of motion simulators, e.g., for architecture walkthroughs.
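
The abstract quantifies pointing performance via response time, pointing error, and configuration error. As a rough illustration only, the sketch below computes a per-trial pointing error (mean absolute angular deviation) and configuration error (variability of the signed errors across targets). These formulas are common operationalizations in the spatial-updating literature; the exact definitions used in the paper are not given in the abstract, so treat them as assumptions.

import numpy as np

def angular_difference(a, b):
    # Smallest signed difference between two angles (degrees), mapped to (-180, 180].
    return (np.asarray(a, dtype=float) - np.asarray(b, dtype=float) + 180.0) % 360.0 - 180.0

def pointing_measures(pointed_deg, correct_deg):
    # Signed per-target errors for one trial (e.g., the four probed targets).
    signed = angular_difference(pointed_deg, correct_deg)
    pointing_error = np.mean(np.abs(signed))       # mean absolute angular error
    configuration_error = np.std(signed, ddof=1)   # spread of signed errors across targets
    return pointing_error, configuration_error

# Hypothetical pointing responses for four targets probed after one rotation.
pe, ce = pointing_measures([35.0, -50.0, 120.0, -160.0],
                           [30.0, -45.0, 110.0, -170.0])
print(f"pointing error = {pe:.1f} deg, configuration error = {ce:.1f} deg")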