
Released

Poster

Understanding Objects and Actions: a VR Experiment

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84298

Wallraven, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84200

Schultze, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84088

Mohler, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84285

Volkova, E
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83780

Alexandrova, I
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84274

Vatakis, A
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator: none available
Fulltext (public): none available
Supplementary Material (public): none available
Citation

Wallraven, C., Schultze, M., Mohler, B., Volkova, E., Alexandrova, I., Vatakis, A., et al. (2010). Understanding Objects and Actions: a VR Experiment. Poster presented at 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Stuttgart, Germany.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BE90-E
Abstract
The human capability to interpret actions and to recognize objects is still far ahead of that of any technical system. A deeper understanding of how humans interpret human (inter)actions therefore lies at the core of building better artificial cognitive systems. Here, we present results from a first series of perceptual experiments that show how humans are able to infer scenario classes, as well as individual actions and objects, from computer animations of everyday situations. The animations were created from a unique corpus of real-life recordings made in the European project POETICON, using motion-capture technology and advanced VR programming that allowed full control over all aspects of the final rendered data.