  It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation

Zhang, X., Sugano, Y., Fritz, M., & Bulling, A. (2016). It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. Retrieved from http://arxiv.org/abs/1611.08860.

Files

arXiv:1611.08860.pdf (Preprint), 7MB
Name: arXiv:1611.08860.pdf
Description: File downloaded from arXiv at 2016-11-30 14:28
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -

Creators

Creators:
Zhang, Xucong 1, Author
Sugano, Yusuke 1, Author
Fritz, Mario 1, Author
Bulling, Andreas 1, Author
Affiliations:
1 Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Human-Computer Interaction, cs.HC
Abstract: While appearance-based gaze estimation methods have traditionally exploited information encoded solely from the eyes, recent results from a multi-region method indicated that using the full face image can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through evaluation on the recent MPIIGaze and EYEDIAP gaze estimation datasets, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.
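The abstract describes applying learned spatial weights to CNN feature maps so that different facial regions can be suppressed or enhanced. The following is a minimal sketch of that general idea, not the authors' implementation: the module name, layer sizes, and the PyTorch framing are assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of re-weighting CNN feature
# maps with a learned spatial mask, as summarized in the abstract above.
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Produces one weight per spatial location and applies it to all channels."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions collapse the channel dimension into a single
        # H x W weight map; the exact depth and widths are illustrative.
        self.weight_net = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, feature_map):
        # feature_map: (N, C, H, W); weights: (N, 1, H, W)
        weights = self.weight_net(feature_map)
        # Broadcast the per-location weight across all channels, so regions
        # of the face can be emphasized or suppressed.
        return feature_map * weights

if __name__ == "__main__":
    features = torch.randn(2, 256, 13, 13)    # hypothetical conv feature maps
    weighted = SpatialWeights(256)(features)
    print(weighted.shape)                      # torch.Size([2, 256, 13, 13])
```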

Details

Language(s): eng - English
Date: 2016-11-27, 2016
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1611.08860
URI: http://arxiv.org/abs/1611.08860
BibTeX citekey: Zhang1611.08860
Degree: -
