
Released

Conference Paper

Automatic synthesis of sequences of human movements by linear combination of learned example patterns

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Giese,  MA
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84018

Knappmeyer,  B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Giese, M., Knappmeyer, B., & Bülthoff, H. (2002). Automatic synthesis of sequences of human movements by linear combination of learned example patterns. Biologically Motivated Computer Vision (LNCS 2525), 538-547.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-DE50-2
Abstract
We present a method for the synthesis of sequences of realistic-looking human movements from learned example patterns. We apply this technique to the synthesis of dynamic facial expressions. Sequences of facial movements are decomposed into individual movement elements, which are modeled by linear combinations of learned examples. The weights of the linear combinations define an abstract pattern space that permits simple modification and parameterization of the style of the individual movement elements. The elements are defined in a way that is suitable for simple automatic resynthesis of longer sequences from movement elements with different styles. We demonstrate the efficiency of this technique for the animation of a 3D head model and discuss how it can be used to generate spatio-temporally exaggerated sequences of facial expressions for psychophysical experiments on caricature effects.
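The core idea of the abstract — modeling a movement element as a linear combination of learned example trajectories, with the weight vector acting as a point in an abstract pattern space — can be sketched in a few lines. This is a hypothetical illustration under simplifying assumptions (patterns pre-aligned in time, stored as frames of coordinates), not the authors' implementation; the function name and toy data are made up.

```python
# Hypothetical sketch: each example pattern p_i is a trajectory of
# T frames, each frame a list of D coordinates (e.g. positions of
# facial feature points). The synthesized movement is
#     p(t) = sum_i w_i * p_i(t),
# so the weight vector (w_1, ..., w_n) parameterizes movement style.

def synthesize(examples, weights):
    """Blend example trajectories by a normalized weighted sum."""
    total = sum(weights)
    norm = [w / total for w in weights]  # convex combination of styles
    T = len(examples[0])                 # number of time steps
    D = len(examples[0][0])              # degrees of freedom per frame
    return [
        [sum(w * ex[t][d] for w, ex in zip(norm, examples)) for d in range(D)]
        for t in range(T)
    ]

# Two toy "learned examples": 3 frames x 2 coordinates each.
p1 = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
p2 = [[0.0, 2.0], [1.0, 2.0], [2.0, 2.0]]

blend = synthesize([p1, p2], [1, 1])  # equal weights: a midway style
# blend == [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
```

Pushing one weight beyond 1 while the others go negative would extrapolate past the examples — the spatio-temporal exaggeration the paper exploits for caricature experiments.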