
Released

Conference Paper

Automatic synthesis of sequences of human movements by linear combination of learned example patterns

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Giese, MA
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84018

Knappmeyer, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Giese, M., Knappmeyer, B., & Bülthoff, H. (2002). Automatic synthesis of sequences of human movements by linear combination of learned example patterns. In Biologically Motivated Computer Vision (LNCS 2525), 538-547.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-DE50-2
Abstract
We present a method for synthesizing sequences of realistic-looking human movements from learned example patterns, and apply it to the synthesis of dynamic facial expressions. Sequences of facial movements are decomposed into individual movement elements, each modeled as a linear combination of learned examples. The weights of the linear combinations define an abstract pattern space that permits simple modification and parameterization of the style of the individual movement elements. The elements are defined in a way that makes it straightforward to automatically resynthesize longer sequences from movement elements with different styles. We demonstrate the efficiency of this technique for the animation of a 3D head model and discuss how it can be used to generate spatio-temporally exaggerated sequences of facial expressions for psychophysical experiments on caricature effects.
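
The core idea described in the abstract — modeling a movement element as a weighted linear combination of learned example trajectories, with the weights acting as style parameters — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the array shapes, the convex-weight normalization, and the toy example trajectories are all assumptions for the sketch.

```python
# Sketch of linear combination of learned example patterns.
# Assumption: each movement element is stored as a time-aligned
# trajectory of equal length (n_frames x n_dims); shapes and
# normalization are illustrative, not taken from the paper.
import numpy as np

def synthesize(examples, weights):
    """Blend learned example trajectories by linear combination.

    examples: array of shape (n_examples, n_frames, n_dims)
    weights:  array of shape (n_examples,); normalized here so the
              combination blends styles rather than rescaling them.
    Returns an array of shape (n_frames, n_dims).
    """
    examples = np.asarray(examples, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # convex weights keep the result in the example span
    return np.tensordot(w, examples, axes=1)

# Two toy "movement elements": a slow and a fast rise of one coordinate.
t = np.linspace(0.0, 1.0, 50)
slow = (t ** 2)[:, None]
fast = np.sqrt(t)[:, None]

# Weights in this abstract pattern space parameterize style:
# (1, 0) reproduces the slow element, (0.5, 0.5) an intermediate style.
blend = synthesize(np.stack([slow, fast]), [0.5, 0.5])
```

Exaggeration beyond the learned examples (as in the caricature experiments mentioned above) would correspond to weights outside the convex range, e.g. extrapolating past one example along the direction away from another.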