  Multi-view Priors for Learning Detectors from Sparse Viewpoint Data

Pepik, B., Stark, M., Gehler, P., & Schiele, B. (2014). Multi-view Priors for Learning Detectors from Sparse Viewpoint Data. In International Conference on Learning Representations 2014 (pp. 1-13). Ithaca, NY: Cornell University. Retrieved from http://arxiv.org/abs/1312.6095.


Files

arXiv:1312.6095.pdf (Preprint), 4KB
Name: arXiv:1312.6095.pdf
Description: File downloaded from arXiv at 2014-05-22 11:57
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Creators:
Pepik, Bojan (1), Author
Stark, Michael (1), Author
Gehler, Peter (2), Author
Schiele, Bernt (1), Author
Affiliations:
(1) Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: While the majority of today's object class models provide only 2D bounding boxes, far richer output hypotheses are desirable including viewpoint, fine-grained category, and 3D geometry estimate. However, models trained to provide richer output require larger amounts of training data, preferably well covering the relevant aspects such as viewpoint and fine-grained categories. In this paper, we address this issue from the perspective of transfer learning, and design an object class model that explicitly leverages correlations between visual features. Specifically, our model represents prior distributions over permissible multi-view detectors in a parametric way -- the priors are learned once from training data of a source object class, and can later be used to facilitate the learning of a detector for a target class. As we show in our experiments, this transfer is not only beneficial for detectors based on basic-level category representations, but also enables the robust learning of detectors that represent classes at finer levels of granularity, where training data is typically even scarcer and more unbalanced. As a result, we report largely improved performance in simultaneous 2D object localization and viewpoint estimation on a recent dataset of challenging street scenes.
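Illustration: the transfer idea described in the abstract can be sketched, very roughly, as follows. Per-viewpoint detector weights are learned from a richly annotated source class, a parametric (here Gaussian) prior over those weights is estimated, and a detector for a data-poor target class is then fit as a MAP estimate regularized toward that prior rather than toward zero. This is not the authors' implementation; all names, dimensions, and the diagonal-Gaussian simplification below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_views, dim = 8, 64                      # hypothetical: 8 viewpoint bins, 64-D linear templates

# Source class: ample viewpoint-annotated data yields one detector per view.
source_detectors = rng.normal(size=(n_views, dim))

# Parametric multi-view prior: per-dimension mean and variance of the
# source-class detector weights (a diagonal Gaussian for simplicity).
prior_mean = source_detectors.mean(axis=0)
prior_var = source_detectors.var(axis=0) + 1e-3

def fit_target_detector(X, y, lam=1.0):
    """Ridge-style MAP fit of a target-class detector for one viewpoint,
    shrunk toward the source-class prior instead of toward zero."""
    A = X.T @ X + lam * np.diag(1.0 / prior_var)
    b = X.T @ y + lam * (prior_mean / prior_var)
    return np.linalg.solve(A, b)

# Target class: only a handful of labeled examples for this viewpoint.
X_tgt = rng.normal(size=(5, dim))
y_tgt = rng.choice([-1.0, 1.0], size=5)
w_tgt = fit_target_detector(X_tgt, y_tgt)
print(w_tgt.shape)  # (64,)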

Details

Language(s): eng - English
Dates: 2013-12-20, 2014-02-16, 2014
 Publication Status: Published online
 Pages: 13 p., 7 figures, 4 tables, International Conference on Learning Representations 2014
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1312.6095
BibTex Citekey: 844
URI: http://arxiv.org/abs/1312.6095
 Degree: -

Event

Title: International Conference on Learning Representations 2014
Place of Event: Banff, Canada
Start-/End Date: 2014-04-14 - 2014-04-16

Source 1

Title: International Conference on Learning Representations 2014
Abbreviation: ICLR 2014
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: Ithaca, NY: Cornell University
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 13
Identifier: -