  BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration

Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., & Theobalt, C. (2016). BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. Retrieved from http://arxiv.org/abs/1604.01093.

Basic data

Genre: Research paper
LaTeX: {BundleFusion}: {R}eal-time Globally Consistent {3D} Reconstruction using On-the-fly Surface Re-integration

Files

arXiv:1604.01093.pdf (Preprint), 5MB
Name:
arXiv:1604.01093.pdf
Description:
File downloaded from arXiv at 2016-10-13 10:32
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-


Creators

Creators:
Dai, Angela (1), author
Nießner, Matthias (1), author
Zollhöfer, Michael (2), author
Izadi, Shahram (1), author
Theobalt, Christian (2), author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Keywords: Computer Science, Graphics, cs.GR; Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
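The pose estimation the abstract describes builds on correspondences between frames (sparse features plus dense matching). The basic building block of such alignment is the least-squares rigid transform between matched 3D points; below is a minimal NumPy sketch of that step using the standard Kabsch/Procrustes method. This is an illustration of the underlying geometry only, not the paper's hierarchical global optimizer, and the function name is hypothetical.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src_i + t,
    computed from matched 3D points via the Kabsch/Procrustes method."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered correspondences
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In a full system such per-pair estimates would feed a global (bundle-adjustment-style) optimization over all camera poses rather than being used frame-to-frame.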

Details

Language(s): eng - English
Date: 2016-04-04
Publication status: Published online
Pages: 17 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1604.01093
URI: http://arxiv.org/abs/1604.01093
BibTeX citekey: DaiarXiv1604.01093
Degree type: -
