
Record

  Generation and Grounding of Natural Language Descriptions for Visual Data

Rohrbach, A. (2017). Generation and Grounding of Natural Language Descriptions for Visual Data. PhD Thesis, Universität des Saarlandes, Saarbrücken.


External References

External reference:
http://scidok.sulb.uni-saarland.de/volltexte/2017/6874/ (any fulltext)
Description:
-
OA status:
Green

Creators

Creators:
Rohrbach, Anna (1, 2), Author
Schiele, Bernt (1), Advisor
Demberg, Vera (3), Referee
Darrell, Trevor (3), Referee
Affiliations:
1 Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547
2 International Max Planck Research School, MPI for Informatics, Max Planck Society, ou_1116551
3 External Organizations, ou_persistent22

Content

Keywords: -
Abstract: Generating natural language descriptions for visual data links computer vision and computational linguistics. Being able to generate a concise and human-readable description of a video is a step towards visual understanding. At the same time, grounding natural language in visual data provides disambiguation for the linguistic concepts, necessary for many applications. This thesis focuses on both directions and tackles three specific problems. First, we develop recognition approaches to understand video of complex cooking activities. We propose an approach to generate coherent multi-sentence descriptions for our videos. Furthermore, we tackle the new task of describing videos at variable levels of detail. Second, we present a large-scale dataset of movies and aligned professional descriptions. We propose an approach that learns from videos and sentences to describe movie clips, relying on robust recognition of visual semantic concepts. Third, we propose an approach to ground textual phrases in images with little or no localization supervision, which we further improve by introducing Multimodal Compact Bilinear Pooling for combining language and vision representations. Finally, we jointly address the task of describing videos and grounding the described people. To summarize, this thesis advances the state-of-the-art in automatic video description and visual grounding and also contributes large datasets for studying the intersection of computer vision and computational linguistics.
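Illustration: the abstract mentions Multimodal Compact Bilinear Pooling for combining language and vision representations. Below is a minimal NumPy sketch of the count-sketch-and-FFT construction that compact bilinear pooling is based on, shown only to illustrate the idea; the feature dimensions, variable names, and output size are illustrative assumptions and do not reproduce the thesis's actual models.

```python
import numpy as np

def count_sketch(v, h, s, d):
    """Project vector v into d dimensions using the Count Sketch given by hash indices h and signs s."""
    sketch = np.zeros(d)
    np.add.at(sketch, h, s * v)  # scatter-add each signed entry to its hashed position
    return sketch

def mcb_pool(x, y, d=8000, seed=0):
    """Approximate the outer product of x and y by circular convolution of their Count Sketches."""
    rng = np.random.default_rng(seed)
    hx = rng.integers(0, d, size=x.shape[0]); sx = rng.choice([-1.0, 1.0], size=x.shape[0])
    hy = rng.integers(0, d, size=y.shape[0]); sy = rng.choice([-1.0, 1.0], size=y.shape[0])
    # Circular convolution via the frequency domain: FFT, elementwise product, inverse FFT.
    fx = np.fft.rfft(count_sketch(x, hx, sx, d))
    fy = np.fft.rfft(count_sketch(y, hy, sy, d))
    return np.fft.irfft(fx * fy, n=d)

# Example (hypothetical sizes): fuse a 2048-d visual feature with a 300-d phrase embedding.
visual = np.random.randn(2048)
phrase = np.random.randn(300)
fused = mcb_pool(visual, phrase)
```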

Details

Language(s): eng - English
Dates: 2017-05-16, 2017-06-02
Publication status: Published online
Pages: X, 215 p.
Place, publisher, edition: Saarbrücken : Universität des Saarlandes
Table of contents: -
Type of review: -
Identifiers: BibTeX citekey: Rohrbachphd17
Degree: Doctoral thesis
