



Conference Paper

EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour


Bulling,  Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;


Bulling, A., Weichel, C., & Gellersen, H. (2013). EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. In S. Bødker, S. Brewster, P. Baudisch, M. Beaudouin-Lafon, & W. E. Mackay (Eds.), CHI 2013 (pp. 305-308). New York, NY: ACM. doi:10.1145/2470654.2470697.

Abstract:
Automatic annotation of life logging data is challenging. In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conduct a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Using person-dependent training, we obtain a top performance of 85.3% precision (98.0% recall) for recognising social interactions. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
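The abstract describes combining string-encoded eye movements with a spectrum string kernel SVM. As background, a minimal sketch of the p-spectrum kernel is shown below: it measures similarity between two strings as the inner product of their k-mer count vectors. The example alphabet and strings are illustrative assumptions, not the paper's actual encoding.

```python
from collections import Counter

def spectrum_kernel(s: str, t: str, k: int = 2) -> int:
    """p-spectrum kernel: inner product of the k-mer count vectors of s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

# Hypothetical encoding: eye movements as characters (e.g. saccade
# directions 'L', 'R', 'U', 'D'); the real EyeContext alphabet may differ.
a = "LRLRUD"
b = "LRUDLR"
print(spectrum_kernel(a, b, k=2))  # shared 2-mers: LR (2x2), RU, UD -> 6
```

Such a kernel can be supplied to an SVM as a precomputed Gram matrix (e.g. scikit-learn's `SVC(kernel="precomputed")`), which is one common way to train a string kernel SVM.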