
Released

Conference Paper

DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model

MPS-Authors
/persons/resource/persons188120

Insafutdinov, Eldar
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons45196

Pishchulin, Leonid
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons98382

Andres, Bjoern
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons45383

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Citation

Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M., & Schiele, B. (2016). DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. In B. Leibe (Ed.), Computer Vision -- ECCV 2016 (pp. 34-50). Berlin: Springer. doi:10.1007/978-3-319-46466-4_3.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002A-FCF9-C
Abstract
The goal of this paper is to advance the state of the art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow the proposals to be assembled into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently, leading both to better performance and to significant speed-ups. We evaluate our approach on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms the best known multi-person pose estimation results while demonstrating competitive performance on the task of single-person pose estimation. Models and code are available at http://pose.mpi-inf.mpg.de
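The assembly step described in the abstract can be pictured, very loosely, as grouping unary part detections into people using pairwise compatibility scores. The sketch below is a hypothetical greedy illustration in Python, not the paper's incremental ILP-based optimization; the function names, data layout, and merging rule are assumptions made purely for illustration.

    # Hypothetical sketch of bottom-up part assembly: unary detection scores
    # plus image-conditioned pairwise terms are used to group part candidates
    # into per-person configurations. NOT the authors' solver; a greedy
    # stand-in for the paper's incremental optimization, for intuition only.
    from itertools import combinations

    def assemble_people(detections, pairwise, threshold=0.0):
        """Greedily cluster body-part candidates into people.

        detections: list of dicts {"part": str, "score": float, "xy": (x, y)}
        pairwise:   function (det_a, det_b) -> float, an image-conditioned
                    compatibility score for placing two candidates on one person
        threshold:  minimum combined score required to merge two clusters
        """
        # Start with every candidate as its own (single-part) person.
        clusters = [[d] for d in detections]
        merged = True
        while merged:
            merged = False
            best = None
            # Consider every pair of clusters that would not duplicate a part.
            for (i, a), (j, b) in combinations(enumerate(clusters), 2):
                if {d["part"] for d in a} & {d["part"] for d in b}:
                    continue  # one person cannot have the same body part twice
                score = sum(pairwise(da, db) for da in a for db in b)
                if score > threshold and (best is None or score > best[0]):
                    best = (score, i, j)
            if best is not None:
                _, i, j = best
                clusters[i] = clusters[i] + clusters[j]
                del clusters[j]          # j > i, so index i stays valid
                merged = True
        return clusters

This greedy rule merges the two most compatible clusters at each step and naturally yields a variable number of people, which is the behaviour the pairwise terms are meant to enable; the paper itself formulates and solves the grouping as a joint optimization problem rather than a greedy procedure.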