
Record


Released

Poster

A visual search advantage for faces learned in motion

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84141

Pilz, K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84258

Thornton, IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There is no freely accessible supplementary material available
Citation

Pilz, K., Thornton, I., & Bülthoff, H. (2005). A visual search advantage for faces learned in motion. Poster presented at Fifth Annual Meeting of the Vision Sciences Society (VSS 2005), Sarasota, FL, USA.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-D441-3
Abstract
Recently there has been growing interest in the role that motion might play in the perception and representation of facial identity. Most studies have used old/new recognition tasks, but, especially for non-rigid motion, these have often produced contradictory results. Here, we used a delayed visual search paradigm to explore how learning is affected by non-rigid facial motion. In an incidental learning phase, two faces were shown sequentially for an extended period of time: one was presented moving non-rigidly, the other as a static picture. After a delay of several minutes, observers (N=18) were asked to indicate the presence or absence of the target faces among unfamiliar distractor faces, using identical static search arrays. Although undegraded facial stimuli were used at both study and test and the search arrays were identical, faces that had been learned in motion were identified almost 300 ms faster than faces learned as static snapshots. In a second experiment we examined a familiar kind of rigid motion. Stimuli consisted of 3D heads from the MPI database, placed on an avatar body, and the figures were animated so as to approach the observer in depth. In this experiment we explicitly compared performance on visual search and old/new recognition tasks (N=22). Again with visual search, observers were significantly faster at detecting the face of the individual learned in motion, whereas with several variants of old/new recognition tasks we were unable to detect a difference between moving and static conditions. Taken together, the visual search results of both experiments provide clear evidence that motion can affect identity decisions across extended periods of time. Moreover, it seems that such effects may be difficult to observe using more traditional old/new recognition tasks; possibly the list-learning aspects of these methods encourage coding strategies that are simply not appropriate for dynamic stimuli.