

Released

Journal Article

Predicting point-light actions in real-time

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83944

Graf, M.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Reitzner, B., Corves, C., Casile, A., Giese, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Graf, M., Reitzner, B., Corves, C., Casile, A., Giese, M., & Prinz, W. (2007). Predicting point-light actions in real-time. NeuroImage, 36(Supplement 2), T22-T32. doi:10.1016/j.neuroimage.2007.03.017


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-CDBF-2
Abstract
There is convincing evidence for a mirror system in humans which simulates the actions of conspecifics. One possible purpose of such a simulation system is to support action prediction in real-time. Our goal was to study whether the prediction of actions involves a real-time simulation process. We motion-captured a number of human actions and rendered them as point-light action sequences. Observers viewed brief videos of these actions, followed by an occluder and a static test posture. We independently varied the occluder time and the movement gap (i.e., the time between the endpoint of the action and the test posture). Observers were required to judge whether the test stimulus depicted a continuation of the action in the same depth orientation. Prediction performance was best when occluder time and movement gap corresponded, i.e., when the test posture was a continuation of the sequence that matched the occluder duration (Experiments 1, 2 and 4). This pattern of results was destroyed when the sequences and test images were flipped around the horizontal axis (Experiment 3). Overall, our findings suggest that action prediction involves a simulation process that operates in real-time. This process can break down when the actions are presented under viewing conditions with which observers have little experience.