InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image

Kim, H., Zollhöfer, M., Tewari, A., Thies, J., Richardt, C., & Theobalt, C. (2017). InverseFaceNet: Deep Single-Shot Inverse Face Rendering From A Single Image. Retrieved from http://arxiv.org/abs/1703.10956.

Basic

Genre: Paper
LaTeX: {InverseFaceNet}: {D}eep Single-Shot Inverse Face Rendering From A Single Image

Files

arXiv:1703.10956.pdf (Preprint), 5MB
Name: arXiv:1703.10956.pdf
Description: File downloaded from arXiv at 2017-07-05 12:41
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Kim, Hyeongwoo (1), Author
Zollhöfer, Michael (1), Author
Tewari, Ayush (1), Author
Thies, Justus (2), Author
Richardt, Christian (2), Author
Theobalt, Christian (1), Author

Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: We introduce InverseFaceNet, a deep convolutional inverse rendering framework for faces that jointly estimates facial pose, shape, expression, reflectance and illumination from a single input image in a single shot. By estimating all these parameters from just a single image, advanced editing possibilities on a single face image, such as appearance editing and relighting, become feasible. Previous learning-based face reconstruction approaches do not jointly recover all dimensions, or are severely limited in terms of visual quality. In contrast, we propose to recover high-quality facial pose, shape, expression, reflectance and illumination using a deep neural network that is trained using a large, synthetically created dataset. Our approach builds on a novel loss function that measures model-space similarity directly in parameter space and significantly improves reconstruction accuracy. In addition, we propose an analysis-by-synthesis breeding approach which iteratively updates the synthetic training corpus based on the distribution of real-world images, and we demonstrate that this strategy outperforms completely synthetically trained networks. Finally, we show high-quality reconstructions and compare our approach to several state-of-the-art approaches.
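
To make the abstract's approach concrete, below is a minimal sketch (in PyTorch) of a single-shot regressor whose loss is computed directly on the face-model parameters. The backbone choice, the parameter-group dimensions, and the plain weighted L2 loss are illustrative assumptions, not the paper's exact architecture or its model-space loss.

# Minimal sketch of single-shot inverse face rendering as parameter
# regression (PyTorch). All names and dimensions are illustrative
# assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical sizes for the five parameter groups the network estimates.
PARAM_DIMS = {"pose": 6, "shape": 80, "expression": 64,
              "reflectance": 80, "illumination": 27}

class InverseFaceRegressor(nn.Module):
    """Maps a single face image to all face-model parameters in one shot."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in backbone
        backbone.fc = nn.Linear(backbone.fc.in_features,
                                sum(PARAM_DIMS.values()))
        self.backbone = backbone

    def forward(self, image):
        flat = self.backbone(image)
        # Split the flat prediction back into named parameter groups.
        chunks = torch.split(flat, list(PARAM_DIMS.values()), dim=1)
        return dict(zip(PARAM_DIMS, chunks))

def parameter_space_loss(pred, target, weights=None):
    """Supervise directly on the model parameters rather than on rendered
    pixels; a weighted L2 stands in for the paper's model-space similarity."""
    weights = weights or {k: 1.0 for k in PARAM_DIMS}
    return sum(weights[k] * nn.functional.mse_loss(pred[k], target[k])
               for k in PARAM_DIMS)

# Usage: one image in, a full parameter estimate out.
net = InverseFaceRegressor()
image = torch.randn(1, 3, 224, 224)  # dummy RGB input
estimate = net(image)                # dict of parameter tensors
ground_truth = {k: torch.zeros_like(v) for k, v in estimate.items()}
loss = parameter_space_loss(estimate, ground_truth)

The design point the abstract emphasizes is that supervision happens on the recovered parameters themselves; the analysis-by-synthesis "breeding" step it mentions would sit outside this sketch, iteratively regenerating the synthetic training corpus around the network's estimates on real-world images.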

Details

Language(s): eng - English
Dates: 2017-03-31, 2017
Publication Status: Published online
Pages: 10 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1703.10956
URI: http://arxiv.org/abs/1703.10956
BibTeX Citekey: DBLP:journals/corr/KimZTTRT17
Degree: -
