
Record


Released

Research Paper

A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation

MPG Authors
/persons/resource/persons79450

Rhodin, Helge
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons79452

Robertini, Nadia
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45289

Richardt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Intel Visual Computing Institute;

/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

arXiv:1602.03725.pdf
(Preprint), 4MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Rhodin, H., Robertini, N., Richardt, C., Seidel, H.-P., & Theobalt, C. (2016). A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation. Retrieved from http://arxiv.org/abs/1602.03725.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002B-9875-C
Abstract
Generative reconstruction methods compute the 3D configuration (such as pose and/or geometry) of a shape by optimizing the overlap of the projected 3D shape model with images. Proper handling of occlusions is a major challenge, since the visibility function that indicates whether a surface point is seen from a camera often cannot be formulated in closed form, and is in general discrete and non-differentiable at occlusion boundaries. We present a new scene representation that enables an analytically differentiable closed-form formulation of surface visibility. In contrast to previous methods, this yields smooth, analytically differentiable, and efficient-to-optimize pose-similarity energies with rigorous occlusion handling, fewer local minima, and experimentally verified improved convergence of numerical optimization. The underlying idea is a new image formation model that represents opaque objects by a translucent medium with a smooth Gaussian density distribution, which turns visibility into a smooth phenomenon. We demonstrate the advantages of our versatile scene model in several generative pose estimation problems, namely marker-less multi-object pose estimation, marker-less human motion capture with few cameras, and image-based 3D geometry estimation.
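The idea of smooth, differentiable visibility can be illustrated with a small numerical sketch. The Python example below is not the paper's implementation; it only assumes that each scene element contributes a 1D Gaussian density along a camera ray, so that the absorption integral, and hence the transmittance used here as a visibility measure, has a closed form via the error function and is differentiable in all parameters. Function names and parameter values are illustrative.

import numpy as np
from scipy.special import erf

def transmittance(t, centers, sigmas, weights):
    # Smooth transmittance along a ray, evaluated at depth t >= 0.
    # Each occluder contributes a 1D Gaussian density along the ray
    # (center, standard deviation, peak weight). The line integral of a
    # Gaussian from 0 to t has a closed form via erf, so exp(-integral)
    # is smooth and differentiable in t and in all Gaussian parameters.
    integral = 0.0
    for mu, sigma, w in zip(centers, sigmas, weights):
        integral += w * sigma * np.sqrt(np.pi / 2.0) * (
            erf((t - mu) / (np.sqrt(2.0) * sigma))
            - erf(-mu / (np.sqrt(2.0) * sigma))
        )
    return np.exp(-integral)

# One Gaussian occluder centered at depth 1.5 along the ray.
centers, sigmas, weights = [1.5], [0.2], [10.0]
print(transmittance(1.0, centers, sigmas, weights))  # point in front of the occluder: visibility near 1
print(transmittance(2.5, centers, sigmas, weights))  # point behind the occluder: visibility near 0

Because the transmittance varies smoothly as the occluder's position, size, or density changes, a gradient-based pose optimizer can reason about occlusion boundaries without the discrete visibility jumps described in the abstract.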