  Stabilization of oneself in Virtual Reality: Interaction of visual and vestibular cues

Kreher, B., von der Heyde, M., & Bülthoff, H. (2001). Stabilization of oneself in Virtual Reality: Interaction of visual and vestibular cues. Poster presented at 4. Tübinger Wahrnehmungskonferenz (TWK 2001), Tübingen, Germany.


Creators

 Creators:
Kreher, B. W.¹, Author
von der Heyde, M.¹, Author
Bülthoff, H. H.¹, Author
Bülthoff, H. H., Editor
Gegenfurtner, K. R., Editor
Mallot, H. A., Editor
Ulrich, R., Editor
Affiliations:
¹ Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Free keywords: -
Abstract: Although different sensory organs have quite different characteristics, humans have no difficulty evaluating their inputs in combination. In particular, when asked to stabilize their position in space, humans must be able to integrate the different sensory inputs. For such a stabilization task, humans rely mainly on the vestibular, visual, and proprioceptive senses. To study this sensor fusion in a body stabilization task, we used a motion platform with six degrees of freedom for the vestibular stimulus, a head-mounted display (HMD) for the visual stimulus, and a joystick as an input device. The motion platform and the HMD simulated the physical model of an inverted pendulum. Using the joystick, subjects could exert a force (acceleration) on the pendulum and thereby control the state of the model. In our experiments, the subjects had to balance themselves on the pendulum against changes in roll, yaw, or both axes simultaneously. They received either vestibular information, visual information, or both. The visual stimulus was a random-dot cloud with limited-lifetime dots and an artificial horizon, chosen to match the character of the vestibular stimulus (absolute positional information for roll, but only information about changes in position for yaw). Subjects performed a pre-test, six training sessions, and a post-test. In the pre- and post-test sections, the subjects had to perform the stabilization task under all nine possible conditions (each lasting 200 seconds). For the training section, the four subjects were divided into two groups receiving visual or vestibular input (VISGroup and VESTGroup, respectively). During the training section, the performance of all subjects showed a large overall improvement. In the pre- and post-tests of the yaw stabilization task, subjects' performance (mean absolute positional error) was much better with the visual than with the vestibular stimulus (pre-test: vestibular 6.00°, visual 2.99°, t(3)=7.32, p<0.005; post-test: vestibular 5.18°, visual 1.64°, t(3)=6.83, p<0.006). For the roll task, all subjects showed a much larger gain in performance with the vestibular than with the visual stimulus (vestibular 2.73°, visual 0.98° decrease in mean absolute positional error from pre- to post-test, t(3)=6.55, p<0.007). Finally, the VESTGroup showed a significant improvement in the visual roll task (pre-test 3.02°, post-test 1.73° standard deviation, t(1)=14.8, p<0.043). The VISGroup also showed a large but non-significant improvement in the vestibular roll task (pre-test 4.26°, post-test 2.19° standard deviation). This suggests that subjects are able to transfer the learned skill from one input modality to another.
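The balancing task described above can be pictured in simulation: a planar inverted pendulum whose tilt the subject corrects through a joystick command acting as an angular acceleration, scored by the mean absolute positional error used as the performance measure. The short Python sketch below is an illustration only; the time step, gravity-to-length ratio, the proportional-derivative form of the "joystick" controller, its gains, and the noise level are assumptions and are not taken from the poster.

```python
import numpy as np

# Illustrative sketch of the stabilization task: a planar inverted pendulum
# perturbed by noise and corrected by a "joystick" command that acts as an
# angular acceleration. All parameter values below (time step, gravity-to-
# length ratio, controller gains, noise level) are assumptions for
# illustration and do not come from the poster.

DT = 0.01          # integration time step in seconds (assumed)
G_OVER_L = 9.81    # gravity / pendulum length in 1/s^2 (assumed)
TRIAL_LEN = 200.0  # trial duration in seconds, as stated in the abstract


def simulate_trial(control_gain, damping_gain=3.0, noise_std=0.3, seed=0):
    """Simulate one 200 s trial and return the mean absolute
    positional error in degrees (the performance measure above)."""
    rng = np.random.default_rng(seed)
    theta = np.radians(2.0)   # small initial tilt
    omega = 0.0               # initial angular velocity
    errors = []
    for _ in range(int(TRIAL_LEN / DT)):
        # joystick input modeled as a proportional-derivative command
        u = -control_gain * theta - damping_gain * omega
        # inverted-pendulum dynamics: destabilizing gravity term,
        # control acceleration, and a random disturbance
        alpha = G_OVER_L * np.sin(theta) + u + rng.normal(0.0, noise_std)
        omega += alpha * DT
        theta += omega * DT
        errors.append(abs(np.degrees(theta)))
    return float(np.mean(errors))


if __name__ == "__main__":
    # e.g. compare a loosely and a tightly tuned controller, standing in
    # for performance before and after training
    print("loose gain:", round(simulate_trial(11.0), 2), "deg")
    print("tight gain:", round(simulate_trial(25.0), 2), "deg")
```

Comparing such per-trial error scores between conditions across the four subjects, for example with a paired t-test, corresponds to the t(3) statistics reported in the abstract.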

Details

Language(s):
 Dates: 2001-03
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: URI: http://www.twk.tuebingen.mpg.de/twk01/Psenso.htm
BibTex Citekey: 66
 Degree: -

Event

Title: 4. Tübinger Wahrnehmungskonferenz (TWK 2001)
Place of Event: Tübingen, Germany
Start-/End Date: -
