  FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality

Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., & Nießner, M. (2016). FaceVR: Real-Time Facial Reenactment and Eye Gaze Control in Virtual Reality. Retrieved from http://arxiv.org/abs/1610.03151.

Files

arXiv:1610.03151.pdf (Preprint), 6MB
Name: arXiv:1610.03151.pdf
Description: File downloaded from arXiv at 2017-01-26 09:07
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Thies, Justus 1, Author
Zollhöfer, Michael 2, Author
Stamminger, Marc 1, Author
Theobalt, Christian 2, Author
Nießner, Matthias 1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.
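The abstract describes a per-frame loop: expression and gaze parameters are tracked from the source actor (wearing the HMD) and used to drive the re-rendering of the target video. The sketch below illustrates that parameter-transfer structure only; the function names, dimensions, and the stand-in tracker and renderer are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

# Hypothetical sketch of the reenactment transfer loop: parameters tracked
# from the source actor drive the target's re-rendering. All names and
# dimensions are illustrative, not taken from the paper.

EXPR_DIMS = 76   # assumed size of an expression coefficient vector
GAZE_DIMS = 2    # assumed gaze direction (yaw, pitch)

def track_source(frame_index):
    """Stand-in for the HMD-based face tracker and monocular eye tracker:
    returns expression coefficients and a gaze direction for one frame."""
    rng = np.random.default_rng(frame_index)  # placeholder for real tracking
    return rng.standard_normal(EXPR_DIMS), rng.standard_normal(GAZE_DIMS)

def reenact(target_neutral, expr, gaze):
    """Stand-in for photo-realistic re-rendering: applies the source's
    expression and gaze parameters to the target's face model."""
    # The real system evaluates a parametric face model and composites the
    # eye region; here we merely combine the parameter vectors.
    return target_neutral + np.concatenate([expr, gaze])

target_neutral = np.zeros(EXPR_DIMS + GAZE_DIMS)
for frame_index in range(3):                # live, per-frame operation
    expr, gaze = track_source(frame_index)  # capture source actor
    output = reenact(target_neutral, expr, gaze)  # drive target video
```

The point of the structure is that source capture and target rendering are decoupled by a compact parameter vector, which is what makes gaze redirection and goggle removal possible in the re-rendered target.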

Details

Language(s): eng - English
Dates: 2016-10-10, 2016
Publication Status: Published online
Pages: 13 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1610.03151
URI: http://arxiv.org/abs/1610.03151
BibTex Citekey: thies16FaceVR
Degree: -
