  BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration

Dai, A., Nießner, M., Zollhöfer, M., Izadi, S., & Theobalt, C. (2016). BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Re-integration. Retrieved from http://arxiv.org/abs/1604.01093.

Basic

Genre: Paper
Latex : {BundleFusion}: {R}eal-time Globally Consistent {3D} Reconstruction using On-the-fly Surface Re-integration

Files

arXiv:1604.01093.pdf (Preprint), 5MB
Name: arXiv:1604.01093.pdf
Description: File downloaded from arXiv at 2016-10-13 10:32
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Dai, Angela¹, Author
Nießner, Matthias¹, Author
Zollhöfer, Michael², Author
Izadi, Shahram¹, Author
Theobalt, Christian², Author
Affiliations:
¹ External Organizations, ou_persistent22
² Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Free keywords: Computer Science, Graphics, cs.GR; Computer Science, Computer Vision and Pattern Recognition, cs.CV
 Abstract: Real-time, high-quality, 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results, but suffer from: (1) needing minutes to perform online correction preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking, and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle adjusted) poses in real-time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real-time to ensure global consistency; all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par to offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
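The abstract's core pose-estimation idea — optimizing one global set of camera poses per frame so that correspondences across the complete input history agree — can be loosely illustrated as a joint least-squares alignment. The sketch below is a deliberately simplified 2D stand-in, not the paper's hierarchical GPU solver: the 2D rigid parameterization, all function names, and the synthetic correspondences are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch (NOT the paper's implementation): jointly solve
# for per-frame poses so that feature correspondences between frame
# pairs agree, with frame 0 pinned to the identity.

def pose_matrix(theta, tx, ty):
    """Homogeneous 2D rigid transform (rotation by theta, then translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def residuals(params, correspondences, n_frames):
    # Frame 0 is fixed at the identity to remove the gauge freedom.
    poses = [np.eye(3)]
    for k in range(n_frames - 1):
        theta, tx, ty = params[3 * k: 3 * k + 3]
        poses.append(pose_matrix(theta, tx, ty))
    res = []
    for i, j, p, q in correspondences:
        ph = np.append(p, 1.0)  # point as seen in frame i
        qh = np.append(q, 1.0)  # matching point as seen in frame j
        res.extend((poses[i] @ ph - poses[j] @ qh)[:2])
    return np.asarray(res)

# Synthetic data: frame 1 observes the same points shifted by (-1, 0)
# in its local coordinates, so the true frame-1 pose translates by (1, 0).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(5, 2))
corr = [(0, 1, p, p - np.array([1.0, 0.0])) for p in pts]

sol = least_squares(residuals, np.zeros(3), args=(corr, 2))
theta, tx, ty = sol.x  # recovers approximately (0, 1, 0)
```

In the paper's actual setting the poses are 6-DoF rigid transforms in 3D, the correspondences mix sparse features with dense geometric and photometric terms, and the solver is a hierarchical, parallelized optimizer that runs per frame in real time; this toy version only shows the shape of the joint energy.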

Details

Language(s): eng - English
Dates: 2016-04-04, 2016
Publication Status: Published online
Pages: 17 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1604.01093
URI: http://arxiv.org/abs/1604.01093
BibTex Citekey: DaiarXiv1604.01093
Degree: -
