
Item Details


Released

Conference Paper

On Feature Combination for Multiclass Object Classification

MPS-Authors
/persons/resource/persons44483

Gehler,  PV
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84113

Nowozin,  S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are currently no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Gehler, P., & Nowozin, S. (2009). On Feature Combination for Multiclass Object Classification. In 2009 IEEE 12th International Conference on Computer Vision (pp. 221-228). Piscataway, NJ, USA: IEEE Computer Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-C286-B
Abstract
A key ingredient in the design of visual object classification systems is the identification of relevant class-specific aspects while remaining robust to intra-class variations. While this is necessary in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In recent years, substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features, and the combination of multiple complementary features based on different aspects such as shape, color, or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore, we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass settings. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key result is that even very simple baseline methods, which are orders of magnitude faster than the learning techniques, are highly competitive with multiple kernel learning. Furthermore, the Boosting-type methods are found to produce consistently better results in all experiments. We provide insight into when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.
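The abstract contrasts learned feature weightings (multiple kernel learning, Boosting-style ensembles) with simple baselines such as a uniform average of per-feature kernels. The sketch below is not taken from the paper; it only illustrates what such an averaging baseline might look like in practice. The feature channels, kernel parameters, and data are hypothetical placeholders, and scikit-learn's precomputed-kernel SVM stands in for whichever multiclass SVM is actually used.

```python
# Illustrative sketch of the "kernel averaging" baseline for feature combination:
# one kernel per feature channel, combined with uniform weights and fed to a
# standard SVM with a precomputed kernel. All data and parameters are made up.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(features_a, features_b, gammas, weights=None):
    """Weighted sum of per-feature RBF kernels (uniform weights by default)."""
    if weights is None:
        weights = np.ones(len(features_a)) / len(features_a)
    return sum(w * rbf_kernel(Xa, Xb, gamma=g)
               for w, Xa, Xb, g in zip(weights, features_a, features_b, gammas))

# Toy data: two hypothetical feature channels (e.g. shape and color descriptors)
# for 100 training and 20 test images, with 5 object classes.
rng = np.random.default_rng(0)
train = [rng.normal(size=(100, 64)), rng.normal(size=(100, 32))]
test = [rng.normal(size=(20, 64)), rng.normal(size=(20, 32))]
y_train = rng.integers(0, 5, size=100)
gammas = [1.0 / 64, 1.0 / 32]

K_train = combined_kernel(train, train, gammas)   # (100, 100)
K_test = combined_kernel(test, train, gammas)     # (20, 100)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
predictions = clf.predict(K_test)
```

Multiple kernel learning would replace the fixed uniform `weights` with weights optimized jointly with the classifier; the abstract's point is that in their experiments this fixed average is already highly competitive while being far cheaper to compute.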