
Released

Research Paper

It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons101747

Zhang,  Xucong
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons134261

Sugano,  Yusuke
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44451

Fritz,  Mario
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons86799

Bulling,  Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resources
No external resources are available
Full texts (freely accessible)

arXiv:1611.08860.pdf
(Preprint), 7 MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Zhang, X., Sugano, Y., Fritz, M., & Bulling, A. (2016). It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. Retrieved from http://arxiv.org/abs/1611.08860.


Citation link: http://hdl.handle.net/11858/00-001M-0000-002C-0EF5-7
Abstract
While appearance-based gaze estimation methods have traditionally exploited information encoded solely from the eyes, recent results from a multi-region method indicated that using the full face image can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through evaluation on the recent MPIIGaze and EYEDIAP gaze estimation datasets, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.
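The spatial-weighting idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the tensor shapes, the random weight map, and the choice of a single weight map shared across all channels are assumptions made for the example.

```python
import numpy as np

# Hypothetical CNN feature maps for one face image: (channels, height, width)
feature_maps = np.random.rand(64, 14, 14)

# Hypothetical learned spatial weight map over the same grid: (height, width).
# In training this would be produced by the network; here it is random.
spatial_weights = np.random.rand(14, 14)

# Broadcast-multiply: every channel is scaled location-by-location, so regions
# of the face can be suppressed (small weight) or enhanced (large weight).
weighted = feature_maps * spatial_weights[np.newaxis, :, :]

# The weighting changes activations but preserves the tensor shape,
# so it can be dropped between any two layers of the network.
assert weighted.shape == feature_maps.shape
```

The key property is that the same spatial weight is applied to every channel at a given location, which lets the network learn which facial regions carry gaze information without altering the downstream layer shapes.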