
Record


Released

Poster

Predicting point-light actions in real-time

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83944

Graf, M.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Reitzner, B., Giese, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84754

Casile, A.
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary Material (freely accessible)
There is no freely accessible supplementary material available
Citation

Graf, M., Reitzner, B., Giese, M., Casile, A., & Prinz, W. (2006). Predicting point-light actions in real-time. Poster presented at 6th Annual Meeting of the Vision Sciences Society (VSS 2006), Sarasota, FL, USA.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D187-0
Abstract
Evidence has accumulated for a mirror system in humans which simulates actions of conspecifics (Wilson & Knoblich, 2005). One likely purpose of such a simulation system is to support action prediction. We focused on the time-course of action prediction, investigating whether the prediction of actions involves a real-time simulation process. We motion-captured a number of human actions and rendered them as point-light action sequences. In the experiments, we presented brief videos of human actions, followed by an occluder and a static test stimulus. Both the occluder duration (SOA of 100, 400, or 700 ms) and the distance of the test stimulus to the endpoint of the action sequence (corresponding to 100, 400, or 700 ms) were varied independently. Subjects had to judge whether the test stimulus depicted a continuation of the action in the same orientation, or whether the test stimulus was presented in a different orientation in depth than the previous action sequence. Prediction accuracy was best when SOA and distance to the endpoint corresponded, i.e., when the test image was a continuation of the sequence that matched the occluder duration. This pattern of results was destroyed when the sequences and test images were inverted (flipped around the horizontal axis). In this case, performance simply deteriorated with increasing distance to the end of the sequence. Overall, our findings suggest that action prediction involves a real-time simulation process. This process can break down when the actions are presented under viewing conditions for which we have little experience.
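The 3 × 3 factorial design described in the abstract (occluder duration crossed independently with test-stimulus distance to the action endpoint, with congruent cells on the diagonal) can be sketched as follows. This is an illustrative reconstruction of the condition structure only; the variable names and helper function are not from the original materials.

```python
# Illustrative sketch of the 3 x 3 design: occluder duration (SOA) and
# distance of the static test stimulus to the action endpoint are varied
# independently. A trial is "congruent" when the test image continues the
# sequence by exactly the occluder duration (SOA == distance).
from itertools import product

SOAS_MS = (100, 400, 700)        # occluder durations reported in the abstract
DISTANCES_MS = (100, 400, 700)   # distances to the endpoint of the sequence

def build_conditions():
    """Return every SOA x distance cell with a congruency flag."""
    return [
        {"soa_ms": soa, "distance_ms": dist, "congruent": soa == dist}
        for soa, dist in product(SOAS_MS, DISTANCES_MS)
    ]

conditions = build_conditions()
# 9 cells in total; the 3 congruent cells (100/100, 400/400, 700/700)
# are where prediction accuracy was reported to be best.
```

The congruency diagonal is the key manipulation: if observers simulate the occluded action in real time, performance should peak exactly where the test image matches the elapsed occluder duration, which is the pattern the abstract reports for upright (but not inverted) sequences.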