
Record


Released

Research Paper

Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45289

Richardt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;
Intel Visual Computing Institute;
University of Bath;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons127713

Kim, Hyeongwoo
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons73102

Valgaerts, Levi
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
No external resources are available
Full texts (freely accessible)

arXiv:1609.05115.pdf
(Preprint), 10MB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Richardt, C., Kim, H., Valgaerts, L., & Theobalt, C. (2016). Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. Retrieved from http://arxiv.org/abs/1609.05115.


Citation link: http://hdl.handle.net/11858/00-001M-0000-002B-9AAF-D
Abstract
We propose a new technique for computing dense scene flow from two handheld videos with wide camera baselines and different photometric properties due to different sensors or camera settings like exposure and white balance. Our technique innovates in two ways over existing methods: (1) it supports independently moving cameras, and (2) it computes dense scene flow for wide-baseline scenarios. We achieve this by combining state-of-the-art wide-baseline correspondence finding with a variational scene flow formulation. First, we compute dense, wide-baseline correspondences using DAISY descriptors for matching between cameras and over time. We then detect and replace occluded pixels in the correspondence fields using a novel edge-preserving Laplacian correspondence completion technique. We finally refine the computed correspondence fields in a variational scene flow formulation. We show dense scene flow results computed from challenging datasets with independently moving, handheld cameras of varying camera settings.
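The Laplacian correspondence completion step mentioned in the abstract can be illustrated with a toy sketch: occluded entries of a correspondence (flow) field are filled by solving a discrete Laplace equation, here via plain Jacobi iteration on a small grid. This is only a minimal illustration of the idea; the paper's actual method is edge-preserving (it weights neighbours by image edges), which this sketch omits, and all names here are hypothetical.

```python
# Toy sketch of Laplacian hole filling for a correspondence field.
# Occluded pixels (known == False) are repeatedly replaced by the mean
# of their 4-neighbours (Jacobi iteration for the Laplace equation),
# while known correspondences stay fixed as boundary conditions.

def laplacian_fill(field, known, iters=200):
    """Fill entries where known[y][x] is False by neighbour averaging."""
    h, w = len(field), len(field[0])
    f = [row[:] for row in field]
    for _ in range(iters):
        nxt = [row[:] for row in f]
        for y in range(h):
            for x in range(w):
                if known[y][x]:
                    continue
                nbrs = [f[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x),
                                       (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w]
                nxt[y][x] = sum(nbrs) / len(nbrs)
        f = nxt
    return f

# Example: horizontal displacements ramp from 0 to 4 across a 5x5 field;
# the centre pixel is marked occluded and gets filled from its context.
flow = [[float(x) for x in range(5)] for _ in range(5)]
known = [[True] * 5 for _ in range(5)]
flow[2][2], known[2][2] = 0.0, False

filled = laplacian_fill(flow, known)
print(filled[2][2])  # recovers 2.0, consistent with the surrounding ramp
```

In the paper this completion runs on dense DAISY-based correspondence fields before the variational refinement; an edge-preserving variant would down-weight neighbours across strong image gradients so filled motion does not bleed over object boundaries.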