
Released

Paper

HandSeg: A Dataset for Hand Segmentation from Depth Images

MPS-Authors

Mueller, Franziska
Computer Graphics, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1711.05944.pdf
(Preprint), 10MB

Citation

Malireddi, S. R., Mueller, F., Oberweger, M., Bojja, A. K., Lepetit, V., Theobalt, C., et al. (2017). HandSeg: A Dataset for Hand Segmentation from Depth Images. Retrieved from http://arxiv.org/abs/1711.05944.


Cite as: https://hdl.handle.net/21.11116/0000-0000-6132-A
Abstract
We introduce a large-scale RGBD hand segmentation dataset with detailed, automatically generated, high-quality ground-truth annotations. Existing real-world datasets are limited in size because manually annotating ground-truth labels is difficult. By leveraging a pair of brightly colored gloves and an RGBD camera, we propose an acquisition pipeline that eases the task of annotating very large datasets with minimal human intervention. We then quantify the importance of a large annotated dataset in this domain, and compare how deep-learning architectures perform when trained on existing datasets. Finally, we propose a novel architecture employing strided convolutions and deconvolutions in place of max-pooling and unpooling layers. Our variant outperforms baseline architectures while remaining computationally efficient at inference time. Source code and datasets will be made publicly available.
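The architectural change described in the abstract, replacing max-pooling/unpooling with strided convolutions/deconvolutions, can be illustrated with a minimal single-channel NumPy sketch. This is a hypothetical toy, not the paper's actual network: it only shows that a stride-2 convolution downsamples like a 2x2 pool (but with learnable weights), and that its transpose upsamples back to the original resolution.

```python
import numpy as np

def strided_conv2d(x, w, stride=2):
    """Valid 2D convolution with stride.

    Downsamples the input like a pooling layer, but the kernel w is
    learnable rather than a fixed max/average operation.
    """
    kh, kw = w.shape
    H, W = x.shape
    oh = (H - kh) // stride + 1
    ow = (W - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

def strided_deconv2d(y, w, stride=2):
    """Transposed (strided) convolution.

    Scatter-adds each input value times the kernel, upsampling the
    feature map back toward the input resolution -- the learnable
    counterpart of an unpooling layer.
    """
    kh, kw = w.shape
    oh = (y.shape[0] - 1) * stride + kh
    ow = (y.shape[1] - 1) * stride + kw
    out = np.zeros((oh, ow))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += y[i, j] * w
    return out

# Toy example: an 8x8 "depth map" through one down/up stage.
x = np.arange(64, dtype=float).reshape(8, 8)
w = np.ones((2, 2))                 # with all-ones weights, stride-2 conv = sum pooling
y = strided_conv2d(x, w)            # shape (4, 4), half resolution
z = strided_deconv2d(y, w)          # shape (8, 8), back to input resolution
```

With the all-ones kernel the downsampling step reduces to 2x2 sum pooling, which makes the pool-vs-strided-conv correspondence explicit; in a real network the kernel weights would be trained end to end.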