
Record


Released

Research Paper

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

MPG Authors
Rhodin, Helge
Computer Graphics, MPI for Informatics, Max Planck Society

Richardt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

Casas, Dan
Computer Graphics, MPI for Informatics, Max Planck Society

Insafutdinov, Eldar
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Shafiei, Mohammad
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
No external resources are available.
Full Texts (freely accessible)

arXiv:1701.00142.pdf
(Preprint), 3 MB

Supplementary Material (freely accessible)
No freely accessible supplementary material is available.
Citation

Rhodin, H., Richardt, C., Casas, D., Insafutdinov, E., Shafiei, M., Seidel, H.-P., et al. (2016). EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract). Retrieved from http://arxiv.org/abs/1701.00142.


Citation link: https://hdl.handle.net/21.11116/0000-0000-3B3D-B
Abstract
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on its center. They often cause discomfort, for instance through the marker suits they may require, and their recording volume is severely restricted, often to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less and egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose-estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new, automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, including crowded scenes.
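The abstract describes combining a generative pose-estimation framework with a discriminative ConvNet-based body-part detector. A minimal, purely illustrative sketch of such a hybrid energy formulation, assuming a toy 1-D "pose" and hypothetical names (`detection_term`, `prior_term`, and `refine` are illustrative, not from the paper):

```python
# Illustrative only: hybrid pose estimation is often framed as minimizing a
# combined energy over skeleton pose parameters. Here, a data term pulls each
# joint toward a ConvNet-style 2-D detection, and a prior term regularizes the
# pose. Real systems use full fisheye camera models and richer pose priors;
# this toy uses scalar "joint positions" for clarity.

def detection_term(pose, detections):
    """Squared distance of each joint to its detected position (toy 1-D)."""
    return sum((p - d) ** 2 for p, d in zip(pose, detections))

def prior_term(pose, rest_pose, weight=0.1):
    """Regularizer keeping joints near a rest pose (stand-in for a pose prior)."""
    return weight * sum((p - r) ** 2 for p, r in zip(pose, rest_pose))

def energy(pose, detections, rest_pose):
    """Combined energy: detection agreement plus pose prior (weight 0.1)."""
    return detection_term(pose, detections) + prior_term(pose, rest_pose)

def refine(pose, detections, rest_pose, steps=100, lr=0.1):
    """Naive gradient descent on the combined energy (weight fixed at 0.1)."""
    pose = list(pose)
    for _ in range(steps):
        for i in range(len(pose)):
            grad = 2.0 * (pose[i] - detections[i]) + 0.2 * (pose[i] - rest_pose[i])
            pose[i] -= lr * grad
    return pose
```

In this toy setup the minimizer is simply a weighted blend of the detections and the rest pose; the point of the sketch is only the structure of the objective, where per-joint detector evidence and a generative prior are optimized jointly.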