
Released

Paper

InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image

MPS-Authors
Kim, Hyeongwoo
Computer Graphics, MPI for Informatics, Max Planck Society

Zollhöfer, Michael
Computer Graphics, MPI for Informatics, Max Planck Society

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1703.10956.pdf
(Preprint), 5MB

Citation

Kim, H., Zollhöfer, M., Tewari, A., Thies, J., Richardt, C., & Theobalt, C. (2017). InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image. Retrieved from http://arxiv.org/abs/1703.10956.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-8BCD-B
Abstract
We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. By estimating all these parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible. Previous learning-based face reconstruction approaches do not jointly recover all dimensions, or are severely limited in terms of visual quality. In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created dataset. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. In addition, we propose an analysis-by-synthesis breeding approach which iteratively updates the synthetic training corpus based on the distribution of real-world images, and we demonstrate that this strategy outperforms completely synthetically trained networks. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art approaches.
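To make the parameter-space loss described in the abstract more concrete, the following is a minimal, illustrative sketch in Python/NumPy. The parameter-group names, dimensions, and uniform weighting are assumptions chosen for illustration; they do not reproduce the exact face-model parameterization or loss weighting used by InverseFaceNet.

```python
import numpy as np

# Assumed parameter groups and sizes; placeholders, not the dimensions
# used by InverseFaceNet.
PARAM_GROUPS = {
    "pose": 6,           # rotation + translation
    "shape": 80,         # identity/shape coefficients
    "expression": 64,    # expression coefficients
    "reflectance": 80,   # albedo basis coefficients
    "illumination": 27,  # e.g. spherical-harmonics lighting (3 x 9)
}

def split_params(theta):
    """Split a flat parameter vector into the named groups above."""
    out, offset = {}, 0
    for name, dim in PARAM_GROUPS.items():
        out[name] = theta[offset:offset + dim]
        offset += dim
    return out

def parameter_space_loss(theta_pred, theta_gt, weights=None):
    """Weighted squared error between predicted and ground-truth
    face-model parameters, accumulated per parameter group."""
    if weights is None:
        weights = {name: 1.0 for name in PARAM_GROUPS}
    pred, gt = split_params(theta_pred), split_params(theta_gt)
    return sum(weights[name] * np.sum((pred[name] - gt[name]) ** 2)
               for name in PARAM_GROUPS)

if __name__ == "__main__":
    dim = sum(PARAM_GROUPS.values())
    rng = np.random.default_rng(0)
    theta_pred, theta_gt = rng.normal(size=dim), rng.normal(size=dim)
    print(parameter_space_loss(theta_pred, theta_gt))
```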