Keywords:
Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract:
Many compelling video post-processing effects, in particular aesthetic focus
editing and refocusing effects, are feasible if per-frame depth information is
available. Existing computational methods to capture RGB and depth either
purposefully modify the optics (coded aperture, light-field imaging), or employ
active RGB-D cameras. Since these methods are less practical for users with
normal cameras, we present an algorithm to capture all-in-focus RGB-D video of
dynamic scenes with an unmodified commodity video camera. Our algorithm turns
the often unwanted defocus blur into a valuable signal. The input to our method
is a video in which the focus plane is continuously moving back and forth
during capture, and thus defocus blur is provoked and strongly visible. This
can be achieved by manually turning the focus ring of the lens during
recording. The core algorithmic ingredient is a new video-based
depth-from-defocus algorithm that computes space-time-coherent depth maps,
deblurred all-in-focus video, and the focus distance for each frame. We
extensively evaluate our approach, and show that it enables compelling video
post-processing effects, such as different types of refocusing.
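As background for why defocus blur is a usable depth signal: under the standard thin-lens model, the diameter of the circle of confusion grows with the mismatch between a point's depth and the focus distance, so the sweeping focus plane described above samples each scene point at varying blur levels. The sketch below illustrates this relation only; it is not the paper's depth-from-defocus algorithm, and all parameter names are illustrative.

```python
def coc_diameter(aperture_mm, focal_mm, focus_dist_mm, subject_dist_mm):
    """Thin-lens circle-of-confusion diameter on the sensor, in mm.

    A point at subject_dist_mm is imaged through a lens of focal length
    focal_mm and aperture diameter aperture_mm while the lens is focused
    at focus_dist_mm. The blur diameter is zero exactly at the focus
    plane and grows as the point moves away from it.
    """
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))

# Example: a 50 mm f/2 lens (25 mm aperture) focused at 2 m.
# A subject at the focus plane is sharp; blur increases with distance.
sharp = coc_diameter(25.0, 50.0, 2000.0, 2000.0)   # 0.0 mm
near_miss = coc_diameter(25.0, 50.0, 2000.0, 4000.0)
far_miss = coc_diameter(25.0, 50.0, 2000.0, 8000.0)
```

Inverting this relation per pixel, given the observed blur and the known focus distance for each frame, is the basic idea behind recovering depth from a focus sweep.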