
Item Details


Released

Paper

Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors

MPS-Authors
Steil, Julian
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Müller, Philipp
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Bulling, Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

arXiv:1801.06011.pdf
(Preprint), 6MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Steil, J., Müller, P., Sugano, Y., & Bulling, A. (2018). Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors. Retrieved from http://arxiv.org/abs/1801.06011.


Cite as: https://hdl.handle.net/21.11116/0000-0001-1834-A
Abstract
Users' visual attention is highly fragmented during mobile interactions, but the erratic nature of these attention shifts currently limits attentive user interfaces to adapting after the fact, i.e. after shifts have already happened, thereby severely limiting adaptation capabilities and user experience. To address these limitations, we study attention forecasting -- the challenging task of predicting whether users' overt visual attention (gaze) will shift between a mobile device and the environment in the near future, or how long users' attention will stay in a given location. To facilitate the development and evaluation of methods for attention forecasting, we present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). As a first step towards a fully fledged attention forecasting interface, we further propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. Using this method, we demonstrate the feasibility of forecasting bidirectional attention shifts between the device and the environment, as well as of predicting the first and total attention span on the device and in the environment. We further study the impact of different sensors and feature sets on performance, and discuss the significant potential, but also the remaining challenges, of forecasting user attention during mobile interactions.
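At a high level, the proof-of-concept framing described in the abstract reduces to extracting per-window features from device-integrated and wearable sensors and classifying whether an attention shift is imminent. The sketch below illustrates that framing only; the synthetic features, label construction, and random-forest choice are assumptions for illustration and are not the authors' actual method, features, or dataset.

# Minimal, hypothetical sketch of attention-shift forecasting as
# sliding-window binary classification. All feature names and the
# random-forest choice are illustrative assumptions, not the paper's
# actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per one-second window, with
# hypothetical per-window features (e.g. touch-event rate, IMU motion
# energy, scene-change score from a body-worn camera).
n_windows = 2000
X = rng.normal(size=(n_windows, 3))

# Hypothetical label: will gaze shift between device and environment
# within the next few seconds? Derived here from the synthetic features
# only so the example runs end to end.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_windows) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"shift-forecast accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

In the setting the abstract describes, labels would instead come from annotated gaze data, and evaluation would need to respect the temporal, per-participant structure of the recordings rather than a random train/test split.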