
Released

Paper

PrivacEye: Privacy-Preserving First-Person Vision Using Image Features and Eye Movement Analysis

MPS-Authors
Steil, Julian
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Bulling, Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1801.04457.pdf (Preprint), 5 MB

Citation

Steil, J., Koelle, M., Heuten, W., Boll, S., & Bulling, A. (2018). PrivacEye: Privacy-Preserving First-Person Vision Using Image Features and Eye Movement Analysis. Retrieved from http://arxiv.org/abs/1801.04457.


Cite as: https://hdl.handle.net/21.11116/0000-0001-1840-C
Abstract
As first-person cameras in head-mounted displays become increasingly prevalent, so does the problem of infringing user and bystander privacy. To address this challenge, we present PrivacEye, a proof-of-concept system that detects privacy-sensitive everyday situations and automatically enables and disables the first-person camera using a mechanical shutter. To close the shutter, PrivacEye detects sensitive situations from first-person camera videos using an end-to-end deep-learning model. To open the shutter without visual input, PrivacEye uses a separate, smaller eye camera to detect changes in users' eye movements and thereby gauge changes in the "privacy level" of the current situation. We evaluate PrivacEye on a dataset of first-person videos recorded in the daily life of 17 participants, who annotated the recordings with privacy sensitivity levels. We discuss the strengths and weaknesses of our proof-of-concept system based on a quantitative technical evaluation as well as qualitative insights from semi-structured interviews.
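
For illustration, here is a minimal Python sketch of the two-detector shutter-control loop the abstract describes. All names below (ShutterState, control_step, the boolean detector outputs) are hypothetical placeholders, not the authors' code: in the paper, the "scene is sensitive" signal comes from an end-to-end deep-learning model on the first-person video, and the "privacy level changed" signal comes from eye-movement analysis on the separate eye camera.

from enum import Enum

class ShutterState(Enum):
    OPEN = "open"      # scene camera records; the scene model sees frames
    CLOSED = "closed"  # scene camera is blinded; only the eye camera runs

def control_step(state: ShutterState,
                 scene_is_sensitive: bool,
                 privacy_level_changed: bool) -> ShutterState:
    """Advance the shutter state machine by one step.

    While OPEN, the scene classifier decides whether to close the
    shutter. While CLOSED there is no visual input, so only the
    eye-movement detector can signal that the situation may have
    changed and the shutter can reopen.
    """
    if state is ShutterState.OPEN and scene_is_sensitive:
        return ShutterState.CLOSED
    if state is ShutterState.CLOSED and privacy_level_changed:
        return ShutterState.OPEN
    return state

if __name__ == "__main__":
    # Toy trace: a sensitive scene closes the shutter; a later change
    # in eye-movement behaviour reopens it.
    state = ShutterState.OPEN
    for scene_flag, eye_flag in [(False, False), (True, False),
                                 (False, False), (False, True)]:
        state = control_step(state, scene_flag, eye_flag)
        print(state.value)

The key asymmetry this captures is that the two detectors are never active at the same time for the same decision: closing relies on the scene camera, while reopening must work without it, which is why the eye camera is needed at all.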