  MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions

Barnett-Cowan, M., Meilinger, T., Vidal, M., Teufel, H., & Bülthoff, H. (2012). MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions. Journal of Visualized Experiments, 63(5), 1-6. doi:10.3791/3436.


Basic Information

Item Type: Journal Article

Files


Related URLs


Creators

Creators:
Barnett-Cowan, M.1, Author
Meilinger, T.1, Author
Vidal, M.1, Author
Teufel, H.1, Author
Bülthoff, H. H.1, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Description

Keywords: -
Abstract: Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can perform path integration based exclusively on visual [2-3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues, particularly kinaesthetic ones, seem to dominate [6-7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8-9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating the angle moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
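
The pointing task has a simple geometric ideal: for a two-segment trajectory confined to a single plane, the correct homing direction follows directly from the two segment lengths and the turn angle between them. The sketch below is not taken from the paper; the function name and sign conventions are assumptions made for illustration. It computes the error-free pointing-back angle, the baseline against which observers' responses can be compared.

import math

def homing_angle(l1, l2, turn_deg):
    """Ideal pointing-back angle for a two-segment path in one plane.

    The observer travels l1 metres, turns by turn_deg degrees, then travels
    l2 metres along the new heading.  Returns the signed angle (degrees)
    between the final heading and the direction back to the starting point.
    (Hypothetical helper for illustration, not from the paper.)
    """
    turn = math.radians(turn_deg)
    # End position, with the first segment along +x and the turn applied
    # before the second segment.
    x = l1 + l2 * math.cos(turn)
    y = l2 * math.sin(turn)
    # Direction back to the origin in world coordinates ...
    back = math.atan2(-y, -x)
    # ... expressed relative to the final heading (equal to the turn angle),
    # then wrapped into (-180, 180] as a signed pointing response.
    rel = math.degrees(back - turn)
    return (rel + 180.0) % 360.0 - 180.0

# Segment lengths used in the study (0.4 m, then 1 m) with both turn angles:
for angle in (45.0, 90.0):
    print(f"{angle:.0f} deg turn -> point back at {homing_angle(0.4, 1.0, angle):.1f} deg")

For these segment lengths the ideal response lies close to straight back (about 168° for a 45° turn and 158° for a 90° turn), so systematic deviations from these values are what indicate under- or overestimation of the angle moved through.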

Details

Language: -
Date: 2012-05
Publication Status: Published
Pages: -
Publishing info: -
Table of Contents: -
Review: -
Identifiers (DOI, ISBN, etc.): URI: http://www.jove.com/pdf/default.aspx?PDF=ID=3436
DOI: 10.3791/3436
BibTeX Citekey: BarnettCowanMVTB2011
Degree: -

Related Events


Legal Case


Project information


Publication 1

Publication Title: Journal of Visualized Experiments
Type: Journal
Authors / Editors: -
Affiliations: -
Publisher, Place: -
Pages: -
Volume / Issue: 63 (5)
Sequence Number: -
Start / End Page: 1 - 6
Identifier (ISBN, ISSN, DOI, etc.): -