
Record

Released

Poster

Motion matters: Facial motion improves delayed visual search performance

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84141

Pilz, K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84258

Thornton, IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Pilz, K., Thornton, I., & Bülthoff, H. (2005). Motion matters: Facial motion improves delayed visual search performance. Poster presented at 8th Tübingen Perception Conference (TWK 2005), Tübingen, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D653-9
Abstract
In the last few years there has been growing interest in the role that motion might play in the perception and representation of facial identity. In separate experiments, we explored how learning is affected by two different types of movement: the non-rigid motion of the face, typically associated with expression and communication, and the equally familiar rigid motion that occurs whenever a person approaches you in depth. Traditional old/new recognition tasks have previously yielded mixed results in the context of facial motion. Therefore, we decided to use a delayed visual search paradigm. Observers were familiarised with two target individuals, one seen in motion, the other via static snapshots. All images were non-degraded and consisted of video sequences in Experiment 1 and 3D heads on a walking avatar body in Experiment 2. After a delay of several minutes, observers were asked to search for their targets in static search arrays. Crucially, during this test phase, the static search arrays were identical regardless of how the face was first seen during learning. Nevertheless, in both experiments, faces that were learned in motion were found more quickly and more accurately than faces that had been learned from snapshots. These findings provide further evidence that facial motion can affect identity decisions, both in the presence of intact form cues and across extended periods of time.