  Visual Decoding of Targets During Visual Search From Human Eye Fixations

Sattar, H., Fritz, M., & Bulling, A. (2017). Visual Decoding of Targets During Visual Search From Human Eye Fixations. Retrieved from http://arxiv.org/abs/1706.05993.

Files

arXiv:1706.05993.pdf (Preprint), 4MB
Name:
arXiv:1706.05993.pdf
Description:
File downloaded from arXiv at 2017-07-05 11:05
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Sattar, Hosnieh [1], Author
Fritz, Mario [1], Author
Bulling, Andreas [1], Author
Affiliations:
[1] Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Human-Computer Interaction, cs.HC
Abstract: What does human gaze reveal about a user's intents, and to what extent can these intents be inferred or even visualized? Gaze has been proposed as an implicit source of information to predict the target of visual search and, more recently, to predict the object class and attributes of the search target. In this work, we go one step further and investigate the feasibility of combining recent advances in encoding human gaze information using deep convolutional neural networks with the power of generative image models to visually decode, i.e. create a visual representation of, the search target. Such visual decoding is challenging for two reasons: 1) the search target resides only in the user's mind as a subjective visual pattern, and most often cannot even be described verbally by the person, and 2) it is, as yet, unclear whether gaze fixations contain sufficient information for this task at all. We show, for the first time, that visual representations of search targets can indeed be decoded from human gaze fixations alone. We propose to first encode fixations into a semantic representation and then decode this representation into an image. We evaluate our method on a recent gaze dataset of 14 participants searching for clothing in image collages and validate the model's predictions in two human studies. Our results show that users were able to select the correct category of the decoded image 62% of the time (chance level: 10%). In our second study we show the importance of a local gaze encoding for decoding users' visual search targets.
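
The encode-then-decode pipeline outlined in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module names (FixationEncoder, TargetDecoder), layer sizes, 64x64 patch size, and mean-pooling over fixations are all illustrative assumptions about how fixated image patches might be encoded into a semantic vector and decoded back into a target image.

import torch
import torch.nn as nn

class FixationEncoder(nn.Module):
    """Encodes a set of fixated image patches into one semantic vector."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(  # toy patch encoder: 3x64x64 -> embed_dim
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_fixations, 3, 64, 64), cropped around each fixation
        z = self.cnn(patches)               # (num_fixations, embed_dim)
        return z.mean(dim=0, keepdim=True)  # pool over fixations -> (1, embed_dim)

class TargetDecoder(nn.Module):
    """Decodes the semantic vector into an image of the presumed target."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(  # toy deconvolutional generator
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(), # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),   # -> 64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.deconv(h)  # (1, 3, 64, 64) decoded target image

# Usage: 12 fixated patches from one search trial -> one decoded target image.
patches = torch.randn(12, 3, 64, 64)
image = TargetDecoder()(FixationEncoder()(patches))
print(image.shape)  # torch.Size([1, 3, 64, 64])

Pooling over fixations makes the encoder order-invariant and lets it accept any number of fixations per trial; the paper's second human study suggests that a local (per-fixation) encoding of the gazed image content is what carries the target information.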

Details

Language(s): eng - English
 Dates: 2017-06-19, 2017-06-21, 2017
 Publication Status: Published online
 Pages: 9 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1706.05993
URI: http://arxiv.org/abs/1706.05993
BibTex Citekey: DBLP:journals/corr/SattarFB17
 Degree: -
