
Record


Released

Conference Paper

EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons86799

Bulling, Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Bulling, A., Weichel, C., & Gellersen, H. (2013). EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour. In S. Bødker, S. Brewster, P. Baudisch, M. Beaudouin-Lafon, & W. E. Mackay (Eds.), CHI 2013 (pp. 305-308). New York, NY: ACM. doi:10.1145/2470654.2470697.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0018-4C91-2
Abstract
Automatic annotation of life logging data is challenging. In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record the eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines an encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Using person-dependent training, we obtain a top performance of 85.3% precision (98.0% recall) for recognising social interactions. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
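The classification approach described in the abstract — encoding eye movements as strings and classifying them with a spectrum string kernel SVM — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the alphabet, the k-mer length, and the toy sequences and labels below are all hypothetical, and the spectrum kernel is the standard k-mer counting kernel combined with scikit-learn's precomputed-kernel SVM.

```python
from collections import Counter

import numpy as np
from sklearn.svm import SVC

def spectrum_features(s, k):
    """Count all contiguous substrings (k-mers) of length k in s."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def spectrum_kernel(a, b, k=3):
    """Spectrum kernel: sum over shared k-mers of count_a * count_b."""
    fa, fb = spectrum_features(a, k), spectrum_features(b, k)
    return sum(fa[m] * fb[m] for m in fa.keys() & fb.keys())

# Toy symbol strings standing in for encoded eye-movement sequences
# (the paper's actual encoding scheme is not reproduced here).
train = ["LRLRFFLR", "FFFFLRFF", "LRLRLRLR", "FFLFFRFF"]
labels = [1, 0, 1, 0]  # e.g. 1 = social interaction, 0 = none

# Precompute the Gram matrix over the training strings and fit the SVM.
gram = np.array([[spectrum_kernel(a, b) for b in train] for a in train])
clf = SVC(kernel="precomputed").fit(gram, labels)

# A new sequence is classified via its kernel values against the training set.
test = "LRLRLRFF"
k_row = np.array([[spectrum_kernel(test, a) for a in train]])
prediction = clf.predict(k_row)[0]
print(prediction)
```

The precomputed-kernel route is used because string kernels operate on variable-length symbol sequences rather than fixed-length feature vectors; only the pairwise kernel values ever reach the SVM.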