
Item


Released

Poster

Recognizing Dynamic Objects Across Viewpoints

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83861

Chuang,  L
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84291

Vuong,  QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84258

Thornton,  IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Chuang, L., Vuong, Q., Thornton, I., & Bülthoff, H. (2006). Recognizing Dynamic Objects Across Viewpoints. Poster presented at the 9th Tübingen Perception Conference (TWK 2006), Tübingen, Germany.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-D2AB-7
Abstract
Recognizing objects across viewpoints presents the visual system with an extremely challenging task. This would be particularly true if learned representations were solely determined by spatial properties. However, a number of recent studies have shown that observers are also highly sensitive to characteristic object motion. Could the availability of characteristic spatio-temporal patterns in the natural environment help explain the ability to generalise across viewpoints? Here, we examined how familiar object motion (both rigid and nonrigid) improves object recognition across different viewpoints. In both experiments, participants were first familiarised with two novel dynamic objects from a fixed viewpoint. These objects presented the observer with a coherent sequence of change that had a unique temporal order, resulting either from rotating a rigid object about the horizontal axis (Experiment 1) or from a characteristic deformation of a nonrigid object (Experiment 2). Subsequently, participants were tested on their ability to discriminate these learned objects from new distractors in a two-interval forced-choice task. During test, objects were presented at 0°, 10°, 20° and 30° around the vertical axis relative to the learned viewpoint, and in either the learned or the reversed temporal order. Motion reversal is a common manipulation used to disrupt spatio-temporal properties without interfering with the object’s spatial characteristics. In both experiments, accuracy decreased with increasing angular deviation from the learned viewpoint. Nonetheless, objects were consistently recognised better when presented in the learned motion sequence (mean accuracy: Expt 1 = 86; Expt 2 = 81) than in the reversed motion condition (mean accuracy: Expt 1 = 81; Expt 2 = 76), across all viewpoints tested (Expt 1: F(1,23) = 13.94, p < 0.01; Expt 2: F(1,23) = 8.78, p < 0.01). These results indicate that both rigid and nonrigid motion facilitated object recognition despite the changes in 2D shape produced by viewpoint rotation.