  Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence

Teramoto, W., & Riecke, B. (2007). Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.


Creators

Creators:
Teramoto, W1, Author
Riecke, BE1, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Content

Keywords: -
Abstract: It is well known that people have difficulties in recognizing an object from novel views as compared to learned views, resulting in increased response times and/or errors. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways of changing views of an object: one is to rotate an object in front of a stationary observer (object-movement), the other is for the observer to move around a stationary object (observer-movement). Simons et al. [1] criticized previous studies in this regard and examined the difference between object- and observer-movement directly. They reported that view-dependency was eliminated when novel views resulted from observer-movement rather than object-movement, and suggested a contribution of extraretinal (vestibular and proprioceptive) information to object recognition. Recently, however, Zhao et al. [2] reported that the observer's movement from one view to another only decreased view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations of 90° instead of 50°. The aim of the present study was to confirm the phenomenon in our virtual reality environment and to clarify the underlying mechanism further by using larger angles of view change (45°-180°, in 45° steps). Two experiments were conducted using an eMagin Z800 3D Visor head-mounted display that was tracked by 16 Vicon MX 13 motion capture cameras. Observers performed sequential-matching tasks. Five novel objects and five mirror-reversed versions of these objects were created by smoothing the edges of Shepard-Metzler objects. A mirror-reflected version of the learned object was used as the distractor in Experiment 1 (N=13), whereas one of the other (i.e., not mirror-reversed) objects was randomly selected on each trial as the distractor in Experiment 2 (N=15). Test views of the objects were manipulated either by viewer movement or by object movement. Both experiments showed a significant overall advantage of viewer movements over object movements. Note, however, that performance was still viewpoint-dependent. These results suggest that when observers move, partially advantageous and cost-saving transformation mechanisms are involved, rather than the complete, automatic spatial-updating mechanism proposed by Simons et al. [1].

Details

Language(s): -
Date: 2007-07
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Degree type: -

Event

Title: 10th Tübinger Wahrnehmungskonferenz (TWK 2007)
Venue: Tübingen, Germany
Start/end date: -
