
Record


Released

Research Paper

Automatic Face Reenactment

MPG Authors
/persons/resource/persons127194

Garrido,  Pablo
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons73102

Valgaerts,  Levi
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons73104

Rehmsen,  Ole
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt,  Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (publicly accessible)

arXiv:1602.02651.pdf
(Preprint), 299KB

Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Garrido, P., Valgaerts, L., Rehmsen, O., Thormählen, T., Perez, P., & Theobalt, C. (2016). Automatic Face Reenactment. Retrieved from http://arxiv.org/abs/1602.02651.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002B-9A53-8
Abstract
We propose an image-based facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, in which the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: the image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity: it does not rely on a 3D face model, it is robust under head motion, and it does not require the source and target performances to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
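The retrieval step described above can be sketched in code. The following is a minimal illustration, not the paper's actual metric: it assumes each frame has already been reduced to an appearance feature vector and a motion feature vector (e.g., from facial landmarks and optical flow), and it combines the two distances with a hypothetical weight `alpha` to pick the best-matching source frame for a given target frame.

```python
import numpy as np

def match_score(src_app, tgt_app, src_mot, tgt_mot, alpha=0.5):
    """Combined appearance + motion distance (lower is a better match).

    alpha weights appearance against motion; the paper's exact metric
    and feature representations differ from this sketch.
    """
    d_app = np.linalg.norm(src_app - tgt_app)   # appearance distance
    d_mot = np.linalg.norm(src_mot - tgt_mot)   # motion distance
    return alpha * d_app + (1.0 - alpha) * d_mot

def best_candidate(target_frame, source_frames, alpha=0.5):
    """Return the index of the source frame minimizing the combined distance.

    Frames are dicts with hypothetical 'app' and 'mot' feature vectors.
    """
    scores = [
        match_score(s["app"], target_frame["app"],
                    s["mot"], target_frame["mot"], alpha)
        for s in source_frames
    ]
    return int(np.argmin(scores))
```

In the full pipeline, candidate selection like this would run per temporal cluster of target frames rather than per frame, and the selected source faces would then be blended into the target via the 2D warping step.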