
Record


Released

Conference Paper

Semantic 3D motion retargeting for facial animation

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83871

Curio,  C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83829

Breidt,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84016

Kleiner,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84291

Vuong,  QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84787

Giese,  MA
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Curio, C., Breidt, M., Kleiner, M., Vuong, Q., Giese, M., & Bülthoff, H. (2006). Semantic 3D motion retargeting for facial animation. In 3rd Symposium on Applied Perception in Graphics and Visualization (APGV 2006) (pp. 77-84). New York, NY, USA: ACM Press.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-D0F7-B
Abstract
We present a system for realistic facial animation that decomposes facial motion capture data into semantically meaningful motion channels based on the Facial Action Coding System. A captured performance is retargeted onto a morphable 3D face model based on a semantically corresponding set of 3D scans. The resulting facial animation achieves a high level of realism by combining the high spatial resolution of a 3D scanner with the high temporal accuracy of motion capture data, which captures subtle facial movements from sparse measurements. Such an animation system allows us to systematically investigate human perception of moving faces. It offers control over many aspects of the appearance of a dynamic face, while utilizing as much measured data as possible to avoid artistic biases. Using our animation system, we report results of an experiment that investigates the perceived naturalness of facial motion in a preference task. For expressions with small amounts of head motion, we find a benefit for our part-based generative animation system, which is capable of local animation, over an example-based approach that deforms the whole face at once.
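The core idea described in the abstract, decomposing a captured frame into weights over semantically meaningful basis shapes (such as FACS action units) and replaying those weights on a corresponding basis of the target face, can be sketched as follows. This is a minimal illustration under assumed dimensions and synthetic data, not the authors' implementation; all names (`B_src`, `B_tgt`, marker and vertex counts) are hypothetical.

```python
import numpy as np

# Hypothetical sketch of part-based motion retargeting:
# 1) decompose a sparse motion-capture frame into action-unit weights,
# 2) apply those weights to a dense target basis built from 3D scans.
rng = np.random.default_rng(0)

n_markers = 12   # sparse motion-capture markers (3D each) -- assumed
n_units = 4      # number of action-unit basis shapes -- assumed

# Source basis: per-unit marker displacements (n_units x n_markers*3)
B_src = rng.normal(size=(n_units, n_markers * 3))

# A captured frame synthesized from known weights (for demonstration only)
w_true = np.array([0.8, 0.0, 0.3, 0.5])
frame = w_true @ B_src

# Decompose: least-squares fit of unit weights, clamped to be non-negative
w, *_ = np.linalg.lstsq(B_src.T, frame, rcond=None)
w = np.clip(w, 0.0, None)

# Retarget: replay the recovered weights on the target model's basis,
# e.g. per-vertex displacements of a dense scanned mesh
n_vertices = 500  # assumed target mesh size
B_tgt = rng.normal(size=(n_units, n_vertices * 3))
neutral_tgt = np.zeros(n_vertices * 3)
animated = neutral_tgt + w @ B_tgt
```

Because the semantic channels are fitted independently of the target geometry, the same weight trajectory can drive any face model that supplies a corresponding set of basis shapes, which is what enables the local, part-based animation the experiment compares against whole-face deformation.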