
Record


Released

Journal Article

Predicting point-light actions in real-time

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83944

Graf, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Reitzner, B., Corves, C., Casile, A., Giese, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Graf, M., Reitzner, B., Corves, C., Casile, A., Giese, M., & Prinz, W. (2007). Predicting point-light actions in real-time. NeuroImage, 36(Supplement 2), T22-T32. doi:10.1016/j.neuroimage.2007.03.017.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-CDBF-2
Abstract
There is convincing evidence for a mirror system in humans which simulates actions of conspecifics. One possible purpose of such a simulation system is to support action prediction in real-time. Our goal was to study whether the prediction of actions involves a real-time simulation process. We motion-captured a number of human actions and rendered them as point-light action sequences. Observers perceived brief videos of these actions, followed by an occluder and a static test posture. We independently varied the occluder time and the movement gap (i.e., the time between the endpoint of the action and the test posture). Observers were required to judge whether the test stimulus depicted a continuation of the action in the same depth orientation. Prediction performance was best when occluder time and movement gap corresponded, i.e., when the test posture was a continuation of the sequence that matched the occluder duration (Experiments 1, 2 and 4). This pattern of results was destroyed when the sequences and test images were flipped around the horizontal axis (Experiment 3). Overall, our findings suggest that action prediction involves a simulation process that operates in real-time. This process can break down when the actions are presented under viewing conditions for which observers have little experience.