
Released

Conference Paper

Translating Video Content to Natural Language Descriptions

MPS-Authors

Rohrbach, Marcus
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Qiu, Wei
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Citation

Rohrbach, M., Qiu, W., Titov, I., Thater, S., Pinkal, M., & Schiele, B. (2013). Translating Video Content to Natural Language Descriptions. In ICCV 2013 (pp. 433-440). Los Alamitos, CA: IEEE Computer Society. doi:10.1109/ICCV.2013.61.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0017-EEF3-B
Abstract
Humans use rich natural language to describe and communicate visual perceptions. To provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content, including, e.g., object and activity labels. To predict the semantic representation, we learn a CRF that models the relationships between the different components of the visual input. Second, we propose to formulate the generation of natural language as a machine translation problem, using the semantic representation as the source language and the generated sentences as the target language. For this, we exploit a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments, we show significant improvements over several baseline approaches motivated by prior work. Our translation approach also improves over related work on an image description task.
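
The abstract describes a two-stage pipeline: CRF-based prediction of a semantic representation, followed by translation of that representation into a sentence. The following minimal Python sketch illustrates only that structure; the labels, scores, and phrase table are hypothetical stand-ins, not the authors' actual CRF features or the statistical machine translation system used in the paper.

# Minimal sketch of the two-stage pipeline from the abstract.
# All data and scores below are hypothetical illustrations.

import itertools

# Stage 1: predict a semantic representation (activity, object) with a
# tiny CRF-style model: unary scores per label plus a pairwise score
# encouraging compatible (activity, object) pairs.
ACTIVITIES = ["cut", "wash"]
OBJECTS = ["carrot", "plate"]

UNARY_ACT = {"cut": 0.7, "wash": 0.3}      # e.g. from visual classifiers
UNARY_OBJ = {"carrot": 0.6, "plate": 0.4}
PAIRWISE = {("cut", "carrot"): 0.9, ("cut", "plate"): 0.1,
            ("wash", "carrot"): 0.3, ("wash", "plate"): 0.8}

def predict_semantic_representation():
    # MAP inference by exhaustive search (feasible for two variables).
    return max(itertools.product(ACTIVITIES, OBJECTS),
               key=lambda p: UNARY_ACT[p[0]] + UNARY_OBJ[p[1]] + PAIRWISE[p])

# Stage 2: "translate" the semantic representation into natural language.
# A real SMT system learns phrase mappings from a parallel corpus of
# videos and descriptions; a hand-written table stands in here.
PHRASE_TABLE = {
    ("cut", "carrot"): "The person cuts the carrot.",
    ("wash", "plate"): "The person washes the plate.",
}

def translate(sem):
    return PHRASE_TABLE.get(sem, "The person {}s the {}.".format(*sem))

if __name__ == "__main__":
    sem = predict_semantic_representation()
    print(sem, "->", translate(sem))

Running the sketch prints ('cut', 'carrot') -> The person cuts the carrot.; in the paper, both stages are learned from data rather than hand-specified.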