
Record


Released

Conference Paper

Top-Down and Multi-Modal Influences on Self-Motion Perception in Virtual Reality

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84170

Riecke, B. E.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84199

Schulte-Pelkum, J.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full Texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary Material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Riecke, B., Västfjäll, L., & Schulte-Pelkum, J. (2005). Top-Down and Multi-Modal Influences on Self-Motion Perception in Virtual Reality. In 11th International Conference on Human-Computer Interaction (HCI International 2005) (pp. 1-10). Mahwah, NJ, USA: Erlbaum.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D52F-3
Abstract
INTRODUCTION: Much of the work on self-motion perception and simulation has investigated the contribution of physical stimulus properties (so-called "bottom-up" factors). This paper provides an overview of recent experiments demonstrating that illusory self-motion perception can also benefit from "top-down" mechanisms, e.g., expectations, the interpretation and meaning associated with the stimulus, and the resulting spatial presence in the simulated environment.

METHODS: Several VR setups were used to independently control different sensory modalities, allowing for well-controlled and reproducible psychophysical experiments. Illusory self-motion perception (vection) was induced using rotating visual or binaural auditory stimuli, presented via a curved projection screen (FOV: 54°×40.5°) or headphones, respectively. Additional vibrations, subsonic sound, or cognitive frameworks were applied in some trials. Vection was quantified in terms of onset time, intensity, and convincingness ratings.

RESULTS & DISCUSSION: Auditory vection studies showed that sound sources participants associated with stationary "acoustic landmarks" (e.g., a fountain) can significantly increase the effectiveness of the self-motion illusion, compared to sound sources typically associated with moving objects (such as the sound of footsteps). A similar top-down effect was observed in a visual vection experiment: showing a rotating naturalistic scene in VR improved vection considerably compared to scrambled versions of the same scene. Hence, the possibility of interpreting the stimulus as a stationary reference frame seems to enhance self-motion perception, which challenges the prevailing opinion that self-motion perception is primarily bottom-up driven. Even the mere knowledge that one might potentially be moved physically significantly increased the convincingness of the self-motion illusion, especially when additional vibrations supported the interpretation that one was really moving.

CONCLUSIONS: Various top-down mechanisms were shown to increase the effectiveness of self-motion simulations in VR, even though they have received little attention in the literature up to now. Thus, we posit that a perceptually oriented approach combining both bottom-up and top-down factors will ultimately enable us to optimize self-motion simulations in terms of both effectiveness and cost.