
Released

Conference Paper

High-Quality Interactive Lumigraph Rendering Through Warping

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45390

Schirmacher, Hartmut
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44602

Heidrich, Wolfgang
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Schirmacher, H., Heidrich, W., & Seidel, H.-P. (2000). High-Quality Interactive Lumigraph Rendering Through Warping. In S. Fels, & P. Poulin (Eds.), Proceedings of Graphics Interface 2000 (GI-00) (pp. 87-94). Toronto, Canada: Canadian Information Processing Society.


Cite as: http://hdl.handle.net/11858/00-001M-0000-000F-34C3-B
Abstract
We introduce an algorithm for high-quality, interactive light field rendering from only a small number of input images. The algorithm bridges the gap between image warping and interpolation from image databases, which represent the two major approaches in image-based rendering. By warping and blending only necessary parts of the Lumigraph images, we are able to generate a single view-corrected texture for every output frame at interactive rates. In contrast to previous light field rendering approaches, our warping-based algorithm is able to fully exploit per-pixel depth information in order to depth-correct the light field samples with maximum accuracy. The complexity of the proposed algorithm is independent of the number of stored reference images and of the final screen resolution. It performs with only small overhead and very few visible artifacts. We demonstrate the visual fidelity as well as the performance of our method through various examples.
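The core operation the abstract refers to, warping reference images into the novel view using per-pixel depth, can be illustrated with a minimal forward-warping sketch. This is not the paper's actual algorithm (which warps and blends only the necessary parts of the Lumigraph images into a view-corrected texture); it is a generic 3D reprojection under assumed pinhole cameras, with illustrative names (`K_src`, `pose_src`, etc.) that do not come from the paper. Occlusions are resolved with a z-buffer, i.e. the nearest warped sample wins.

```python
import numpy as np

def warp_image(img, depth, K_src, pose_src, K_dst, pose_dst):
    """Forward-warp a grayscale reference image into a target view
    using per-pixel depth: back-project each pixel to 3D, transform
    it into the target camera frame, re-project, and splat with a
    z-buffer so that the nearest sample wins at each output pixel.

    K_* are 3x3 pinhole intrinsics; pose_* are 4x4 camera-to-world
    matrices. All names here are illustrative assumptions.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # homogeneous pixel coordinates, one column per source pixel (3 x N)
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # back-project into the source camera frame using per-pixel depth
    pts = np.linalg.inv(K_src) @ pix * depth.ravel()
    # source camera frame -> world frame
    pts = pose_src[:3, :3] @ pts + pose_src[:3, 3:4]
    # world frame -> target camera frame (inverse of the target pose)
    pts = pose_dst[:3, :3].T @ (pts - pose_dst[:3, 3:4])
    # project into the target image plane
    proj = K_dst @ pts
    z = proj[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u = proj[0] / z
        v = proj[1] / z
    out = np.zeros_like(img)
    zbuf = np.full((h, w), np.inf)
    ok = (z > 0) & np.isfinite(u) & np.isfinite(v)
    u = np.where(ok, np.round(u), -1).astype(int)
    v = np.where(ok, np.round(v), -1).astype(int)
    ok &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    src = img.ravel()
    for i in np.flatnonzero(ok):
        if z[i] < zbuf[v[i], u[i]]:  # nearest warped sample wins
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = src[i]
    return out
```

In the paper's setting this kind of warp would be applied only to the portions of each Lumigraph image needed for the current view, and the warped contributions blended, which is what keeps the cost independent of the number of stored reference images.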