
Released

Poster

Personal Exploratory Experience of an Object Facilitates Its Subsequent Recognition

MPG Authors
/persons/resource/persons83861

Chuang,  LL
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Chuang, L., Vuong, Q., Thornton, I., & Bülthoff, H. (2007). Personal Exploratory Experience of an Object Facilitates Its Subsequent Recognition. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CD05-2
Abstract
Current research shows that human object recognition is sensitive to the learned order of familiar object views (e.g., [1]). This temporal order of views could be determined by how an observer manipulates an object during learning (e.g., rigid rotations in depth). In fact, the freedom to manipulate objects during learning is also known to improve subsequent recognition from single static images [2]. In this study, sixteen participants learned novel 3D amoeboid objects by manipulating them in a virtual reality environment. This required the use of a marker tracking system (VICON) and a head-mounted display (eMagin Z800 3DVisor). Our participants handled a tracked device whose spatial coordinates, relative to the observer's viewpoint, determined the position and orientation of a virtual object that was presented via the head-mounted display. Hence, this device acted as a physical substitute for the virtual object, and its coordinates were recorded as motion trajectories. In a subsequent old/new recognition test, participants either actively explored or passively viewed old (learned) and new objects in the same setup. Generally, "active" participants performed better than "passive" participants (in terms of sensitivity: d' = 1.08 vs. 0.84, respectively). Nonetheless, passive viewing of learned objects that were animated with their learned motion trajectories resulted in comparably good performance (d' = 1.13). The performance decrease was specific to passively viewing learned objects that either had their learned motion trajectories temporally reversed (d' = 0.69) or followed another observer's motion trajectories (d' = 0.70). Therefore, object recognition performance from passively viewing one's past explorations of the learned object is comparable to actively exploring the learned object itself. These results provide further support for a dependence on the temporal ordering of views during object recognition.
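The sensitivity values reported above are the standard signal-detection measure d' = z(hit rate) − z(false-alarm rate) used for old/new recognition tests. As a minimal sketch of how such a value is computed (the hit and false-alarm rates below are hypothetical illustrations, not data from this study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Rates must lie strictly between 0 and 1 (in practice, extreme rates
    are corrected, e.g. with a log-linear adjustment, before this step).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: a hit rate of 0.75 with a false-alarm rate of 0.30.
print(round(d_prime(0.75, 0.30), 2))  # → 1.2
```

Equal hit and false-alarm rates give d' = 0 (no discrimination); larger positive values indicate better discrimination of old from new objects.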
Finally, these results could also be considered in the context of studies that highlight the human ability to discriminate one's own actions from other people's actions, e.g., hand gestures, handwriting, dart-throwing, full-body walking, and ballet (for discussion and examples, see [3]). Here, our study also showed better recognition from viewing videos of self-generated actions. Nonetheless, this recognition benefit was specific to the learned objects, which were not concretely embodied in the observer's person. Moreover, animating new objects with the participants' own actions did not increase their familiarity. We conclude by suggesting that our observers did not merely show a familiarity with their past actions but rather with the idiosyncratic visual experiences that their own actions created.