
Record


Released

Poster

Multimodal integration in the estimation of walked distances

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84378

Campos, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83842

Butler, J
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84088

Mohler, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

Campos, J., Butler, J., Mohler, B., & Bülthoff, H. (2008). Multimodal integration in the estimation of walked distances. Poster presented at 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-C86F-D
Abstract
When walking through space, both dynamic visual information (i.e., optic flow) and body-based information (i.e., proprioceptive/efference copy and vestibular cues) jointly specify the magnitude of the distance travelled. While recent evidence has demonstrated the extent to which each of these cues can be used independently, relatively little is known about how they are integrated when simultaneously present. In this series of experiments, participants first travelled a predefined distance and subsequently matched it by adjusting an egocentric, in-depth target. Visual information was presented via a head-mounted display and consisted of a long, richly textured, virtual hallway. Body-based cues were provided either by walking in a fully tracked free-walking space or by walking on a large linear treadmill. Travelled distances were specified either through optic flow alone, through body-based cues alone (i.e., blindfolded walking), or through both cues combined. In the combined condition, visually specified distances were either congruent (1.0x) or incongruent (0.7x or 1.4x) with the distances specified by body-based cues. The incongruencies were introduced by changing either the visual gain during natural walking or the proprioceptive gain during treadmill walking. Responses reflected a combined effect of both visual and body-based information, with an overall stronger influence of body-based cues.