  Attentive Explanations: Justifying Decisions and Pointing to the Evidence

Park, D. H., Hendricks, L. A., Akata, Z., Schiele, B., Darrell, T., & Rohrbach, M. (2016). Attentive Explanations: Justifying Decisions and Pointing to the Evidence. Retrieved from http://arxiv.org/abs/1612.04757.


Files

arXiv:1612.04757.pdf (Preprint), 10 MB
Name: arXiv:1612.04757.pdf
Description: File downloaded from arXiv at 2017-01-27 09:44
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Creators:
Park, Dong Huk [1], Author
Hendricks, Lisa Anne [1], Author
Akata, Zeynep [2], Author
Schiele, Bernt [2], Author
Darrell, Trevor [1], Author
Rohrbach, Marcus [1], Author
Affiliations:
[1] External Organizations, ou_persistent22
[2] Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition (cs.CV); Computer Science, Artificial Intelligence (cs.AI); Computer Science, Computation and Language (cs.CL)
Abstract: Deep models are the de facto standard in visual decision making due to their impressive performance on a wide array of visual tasks. However, they are frequently seen as opaque and unable to explain their decisions. In contrast, humans can justify their decisions with natural language and point to the evidence in the visual world that led to those decisions. We postulate that deep models can do this as well and propose our Pointing and Justification (PJ-X) model, which can justify its decision with a sentence and point to the evidence by introspecting its decision and explanation process using an attention mechanism. Unfortunately, no dataset with reference explanations for visual decision making is available. We therefore collect two datasets in two domains where it is interesting and challenging to explain decisions. First, we extend the visual question answering task to provide not only an answer but also a natural language explanation for the answer. Second, we focus on explaining human activities, which is traditionally more challenging than object classification. We extensively evaluate our PJ-X model on both the justification and pointing tasks, comparing it to prior models and ablations using both automatic and human evaluations.
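The abstract describes "pointing" to visual evidence with an attention mechanism. The sketch below is a minimal, hypothetical illustration of that general idea (not the authors' PJ-X implementation): a query vector (e.g., an encoded question plus predicted answer) is scored against each cell of a spatial image-feature grid, and a softmax over the scores yields the spatial pointing map. All names, dimensions, and projection matrices here are illustrative assumptions.

import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def point_to_evidence(image_feats, query, w_img, w_q):
    """Generic attention-based pointing (illustrative, not PJ-X itself).

    image_feats: (H*W, D) grid of visual features, e.g. from a CNN
    query:       (K,) encoded question/answer vector
    w_img, w_q:  projections into a shared scoring space (assumed learned)
    Returns the spatial attention map and the attended visual summary.
    """
    proj_img = image_feats @ w_img      # (H*W, S)
    proj_q = query @ w_q                # (S,)
    scores = proj_img @ proj_q          # (H*W,) relevance of each cell
    attention = softmax(scores)         # spatial "pointing" map
    context = attention @ image_feats   # (D,) evidence-weighted summary
    return attention, context

# Toy usage with hypothetical sizes: a 7x7 grid of 512-d features, 300-d query.
rng = np.random.default_rng(0)
feats = rng.standard_normal((49, 512))
query = rng.standard_normal(300)
w_img = rng.standard_normal((512, 256))
w_q = rng.standard_normal((300, 256))
attn, ctx = point_to_evidence(feats, query, w_img, w_q)
print(attn.reshape(7, 7).round(3))      # where the model "points"

In the paper's setting, the attended summary would additionally condition a sentence decoder that generates the natural language justification; that part is omitted here.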

Details

Language(s): eng - English
Dates: 2016-12-14, 2016
Publication Status: Published online
Pages: 17 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1612.04757
URI: http://arxiv.org/abs/1612.04757
BibTeX Citekey: Park2017
Degree: -
