
Released

Paper

End-to-End Eye Movement Detection Using Convolutional Neural Networks

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons79287

Hoppe, Sabrina
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons86799

Bulling, Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)

arXiv:1609.02452.pdf
(Preprint), 3MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Hoppe, S., & Bulling, A. (2016). End-to-End Eye Movement Detection Using Convolutional Neural Networks. Retrieved from http://arxiv.org/abs/1609.02452.


Cite as: http://hdl.handle.net/11858/00-001M-0000-002B-AC6A-B
Abstract
Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors that each detect only a single movement type, or require pre-segmented data. We propose a novel approach for eye movement detection that involves learning only a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements, without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNNs), which have recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth-annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.
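
The abstract describes a single CNN-based detector learned end-to-end from the continuous gaze stream, classifying saccades, fixations, and smooth pursuits without hand-crafted features. A minimal PyTorch sketch of that idea follows, assuming a sliding-window formulation with an illustrative 30-sample window, raw (x, y) coordinates as input, and small 1D convolutional layers; these specifics are assumptions for illustration and are not the architecture reported in the paper.

import torch
import torch.nn as nn

class GazeEventCNN(nn.Module):
    # 1D CNN mapping a window of (x, y) gaze samples to one of three
    # eye movement classes: fixation, saccade, or smooth pursuit.
    # Window size, channel counts, and kernel sizes are illustrative choices.
    def __init__(self, window_size: int = 30, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2),   # 2 input channels: x and y coordinates
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # After two poolings the temporal dimension shrinks to window_size // 4.
        self.classifier = nn.Linear(32 * (window_size // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, window_size) -- the continuous gaze stream cut into fixed windows
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Classify a batch of hypothetical 30-sample gaze windows.
model = GazeEventCNN()
windows = torch.randn(8, 2, 30)        # 8 windows of raw (x, y) gaze coordinates
logits = model(windows)                # shape (8, 3): one score per eye movement class
predictions = logits.argmax(dim=1)     # e.g. 0 = fixation, 1 = saccade, 2 = smooth pursuit

In this sketch each window of the stream is classified independently; the paper's actual window length, input encoding, and network depth may differ from these assumed values.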