  EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)

Rhodin, H., Richardt, C., Casas, D., Insafutdinov, E., Shafiei, M., Seidel, H.-P., et al. (2016). EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract). Retrieved from http://arxiv.org/abs/1701.00142.

Basic Data

Genre: Research Paper

Files

arXiv:1701.00142.pdf (Preprint), 3 MB
Name:
arXiv:1701.00142.pdf
Description:
File downloaded from arXiv at 2018-01-29 08:38. Short version of a SIGGRAPH Asia 2016 paper (arXiv:1609.07306), presented at EPIC@ECCV16.
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

Creators

Creators:
Rhodin, Helge1, Author
Richardt, Christian1, Author
Casas, Dan1, Author
Insafutdinov, Eldar2, Author
Shafiei, Mohammad1, Author
Seidel, Hans-Peter1, Author
Schiele, Bernt2, Author
Theobalt, Christian1, Author
Affiliations:
1 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
2 Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. Marker suits, where required, cause discomfort, and the recording volume is severely restricted, often to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less and egocentric motion capture that estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, as well as in crowded scenes.

Details

Language(s): eng - English
Date: 2016-12-31
Publication status: Published online
Pages: 4 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 1701.00142
URI: http://arxiv.org/abs/1701.00142
BibTeX citekey: DBLP:journals/corr/RhodinRCISSST17
Degree type: -
