Released

Conference Paper

Exploiting Mutual Camera Visibility in Multi-camera Motion Estimation

MPS-Authors

Kurz, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

Thormählen, Thorsten
Computer Graphics, MPI for Informatics, Max Planck Society;

Rosenhahn, Bodo
Computer Graphics, MPI for Informatics, Max Planck Society;

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Kurz, C., Thormählen, T., Rosenhahn, B., & Seidel, H.-P. (2009). Exploiting Mutual Camera Visibility in Multi-camera Motion Estimation. In G. Bebis, R. D. Boyle, B. Parvin, D. Koracin, Y. Kuno, J. Wang, et al. (Eds.), 5th International Symposium on Visual Computing (ISVC09) (pp. 391-402). Berlin: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-19A9-7
Abstract
This paper addresses the estimation of camera motion and 3D reconstruction from image sequences for multiple independently moving cameras. If multiple moving cameras record the same scene, a camera is often visible in another camera's field of view. This poses a constraint on the position of the observed camera, which can be included in the joint optimization process. The paper makes the following contributions: firstly, a fully automatic algorithm is presented that detects and tracks the position of a moving camera in the image sequence of another moving camera; secondly, a sparse bundle adjustment algorithm is introduced that incorporates this additional constraint on the position of the tracked camera. Since the additional constraints minimize the geometric error at the boundary of the reconstructed volume, the total reconstruction accuracy can be improved significantly. Experiments with synthetic and challenging real-world scenes show the improved performance of our fully automatic method.
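Note: the following is an illustrative sketch of how such a mutual-visibility constraint might enter a bundle adjustment objective; the notation (projections P_i^t, scene points X_j, tracked camera image positions c_ik^t, weight lambda) is introduced here for illustration only and is not necessarily the exact formulation used in the paper.

E = \sum_{i,\,t,\,j} \left\| \mathbf{x}_{ij}^{t} - \pi\!\left(\mathsf{P}_i^{t}\,\tilde{\mathbf{X}}_j\right) \right\|^2 \;+\; \lambda \sum_{(i,k),\,t} \left\| \mathbf{c}_{ik}^{t} - \pi\!\left(\mathsf{P}_i^{t}\,\tilde{\mathbf{C}}_k^{t}\right) \right\|^2

Here P_i^t denotes the projection matrix of camera i at frame t, X_j the j-th reconstructed 3D point with measured image position x_ij^t, C_k^t the 3D center of camera k at frame t, c_ik^t the automatically tracked 2D position of camera k in camera i's image, π the perspective division, and λ a weight balancing the standard reprojection residuals against the camera-visibility residuals. Minimizing the second term ties each camera's estimated position to its observations in the other cameras' images, which is the additional constraint described in the abstract.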