  EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

Rhodin, H., Richardt, C., Casas, D., Insafutdinov, E., Shafiei, M., Seidel, H.-P., Schiele, B., & Theobalt, C. (2016). EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract). Retrieved from http://arxiv.org/abs/1701.00142.

Files

Name: arXiv:1701.00142.pdf (Preprint), 3 MB
Description: File downloaded from arXiv at 2018-01-29 08:38. Short version of the SIGGRAPH Asia 2016 paper (arXiv:1609.07306), presented at EPIC@ECCV16.
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -


Creators

Rhodin, Helge [1], Author
Richardt, Christian [1], Author
Casas, Dan [1], Author
Insafutdinov, Eldar [2], Author
Shafiei, Mohammad [1], Author
Seidel, Hans-Peter [1], Author
Schiele, Bernt [2], Author
Theobalt, Christian [1], Author
Affiliations:
[1] Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
[2] Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted, often to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less and egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strengths of a new generative pose-estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new, automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, even in crowded ones.
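
To make the two-part design concrete, here is a minimal sketch (not the authors' implementation) of the hybrid estimation the abstract describes: a generative model-to-image term for the two fisheye views combined with a ConvNet body-part detection term, minimized jointly over the skeleton pose. The toy energy terms, the weight w_det, and the pose dimensionality below are hypothetical placeholders; the actual energy formulation is given in the full paper (arXiv:1609.07306).

import numpy as np
from scipy.optimize import minimize

N_DOF = 17  # hypothetical number of skeleton pose parameters

def generative_energy(pose):
    # Placeholder for how well the posed body model explains both fisheye images.
    return float(np.sum((pose - 0.1) ** 2))

def detection_energy(pose):
    # Placeholder for the distance of projected joints to ConvNet part detections.
    return float(np.sum((pose + 0.05) ** 2))

def total_energy(pose, w_det=0.5):
    # The key idea from the abstract: one energy that fuses generative and
    # discriminative (detection) evidence, optimized over the pose vector.
    return generative_energy(pose) + w_det * detection_energy(pose)

result = minimize(total_energy, x0=np.zeros(N_DOF), method="BFGS")
print("estimated pose parameters:", result.x)

In the paper this joint optimization is what lets the method run in general indoor, outdoor, and crowded scenes: the detector provides robustness where the generative term alone would drift, and vice versa.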

Details

Language(s): eng - English
Dates: 2016-12-31 (published online); 2016
Publication Status: Published online
Pages: 4 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1701.00142
URI: http://arxiv.org/abs/1701.00142
BibTex Citekey: DBLP:journals/corr/RhodinRCISSST17
Degree: -
