
Released

Report

How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes

MPS-Authors
/persons/resource/persons44518

Granados, Miguel
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons44776

Kim, Kwang
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

MPI-I-2011-4-001.pdf
(Any fulltext), 14MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Granados, M., Tompkin, J., Kim, K., Grau, O., Kautz, J., & Theobalt, C. (2011). How Not to Be Seen -- Inpainting Dynamic Objects in Crowded Scenes (MPI-I-2011-4-001). Saarbrücken: MPI für Informatik.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0010-13C5-3
Abstract
Removing dynamic objects from videos is an extremely challenging problem that even visual effects professionals often solve with time-consuming manual frame-by-frame editing. We propose a new approach to video completion that can deal with complex scenes containing dynamic background and non-periodic moving objects. We build on the idea that the spatio-temporal hole left by a removed object can be filled with data available in other regions of the video where the occluded objects were visible. Video completion is performed by solving a large combinatorial problem that searches for an optimal pattern of pixel offsets from occluded to unoccluded regions. Our contribution includes an energy functional that generalizes well over different scenes with stable parameters and that has the convergence properties desirable for graph-cut-based optimization. We provide an interface to guide the completion process that both reduces computation time and allows for efficient correction of small errors in the result. We demonstrate that our approach can effectively complete complex, high-resolution occlusions that are more difficult than those handled by existing methods.
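
The abstract frames video completion as a discrete labelling problem: every pixel in the spatio-temporal hole is assigned an offset that points to an unoccluded region of the video, and the assignment is chosen by minimizing an energy with a data term (the source pixel must be valid) and a smoothness term (neighbouring pixels should pick mutually consistent offsets). The sketch below is not the authors' implementation; it only illustrates that offset-labelling idea under simplifying assumptions: candidate offsets are purely temporal, the data term is a plain validity check, and the energy is minimized with iterated conditional modes rather than the graph-cut-based optimization used in the report. All function names, parameters, and the demo data are illustrative.

```python
import numpy as np

BIG = 1e9  # cost assigned to invalid source pixels


def data_cost(video, hole, t, y, x, dt):
    """Cost of filling hole pixel (t, y, x) from frame t + dt (validity only)."""
    T = video.shape[0]
    src = t + dt
    if src < 0 or src >= T or hole[src, y, x]:
        return BIG  # source frame must exist and be unoccluded there
    return 0.0


def smooth_cost(video, t, y, x, dt_p, dt_q):
    """Disagreement penalty between two neighbouring offset choices:
    the two candidate sources should look alike at the shared location."""
    if dt_p == dt_q:
        return 0.0
    T = video.shape[0]
    a, b = t + dt_p, t + dt_q
    if a < 0 or a >= T or b < 0 or b >= T:
        return BIG
    return float(np.sum(np.abs(video[a, y, x] - video[b, y, x])))


def complete(video, hole, offsets, iters=5, lam=1.0):
    """Assign one temporal offset per hole pixel by iterated conditional
    modes, then copy the selected source pixels into the hole."""
    T, H, W, _ = video.shape
    label = np.zeros((T, H, W), dtype=int)  # index into `offsets`
    hole_idx = np.argwhere(hole)
    for _ in range(iters):
        for t, y, x in hole_idx:
            best, best_cost = label[t, y, x], np.inf
            for li, dt in enumerate(offsets):
                cost = data_cost(video, hole, t, y, x, dt)
                # spatial 4-neighbourhood inside the hole
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and hole[t, ny, nx]:
                        dq = offsets[label[t, ny, nx]]
                        cost += lam * smooth_cost(video, t, y, x, dt, dq)
                if cost < best_cost:
                    best, best_cost = li, cost
            label[t, y, x] = best
    out = video.copy()
    for t, y, x in hole_idx:
        src = t + offsets[label[t, y, x]]
        if 0 <= src < T and not hole[src, y, x]:
            out[t, y, x] = video[src, y, x]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((8, 16, 16, 3))       # tiny synthetic clip
    hole = np.zeros((8, 16, 16), dtype=bool)
    hole[3:5, 6:10, 6:10] = True             # spatio-temporal hole to fill
    result = complete(video, hole, offsets=[-3, -2, -1, 1, 2, 3])
    print("filled", int(hole.sum()), "hole pixels from unoccluded frames")
```

In the report's formulation the search space is far richer (spatio-temporal offsets over high-resolution video) and the energy is designed so that graph-cut optimization converges reliably with stable parameters across scenes; the sketch keeps only the structure of the objective, not those properties.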