
Conference Paper

High-Quality Interactive Lumigraph Rendering Through Warping

MPS-Authors

Schirmacher, Hartmut
Computer Graphics, MPI for Informatics, Max Planck Society

Heidrich, Wolfgang
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society
Citation

Schirmacher, H., Heidrich, W., & Seidel, H.-P. (2000). High-Quality Interactive Lumigraph Rendering Through Warping. In S. Fels, & P. Poulin (Eds.), Proceedings of the Graphics Interface 2000 Conference (pp. 87-94). Toronto, Canada: Canadian Information Processing Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-34C3-B
Abstract
We introduce an algorithm for high-quality, interactive light field
rendering from only a small number of input images.

The algorithm bridges the gap between image warping and interpolation
from image databases, the two major approaches in image-based
rendering. By warping and blending only the necessary parts of the
Lumigraph images, we are able to generate a single view-corrected
texture for every output frame at interactive rates.
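The blending step can be illustrated as a per-pixel convex combination of the warped reference textures. This is a minimal sketch, not the paper's method: the scalar per-image weights here (e.g. derived from viewpoint proximity) are an assumption, as the abstract does not specify the weighting scheme.

```python
import numpy as np

def blend_warped(textures, weights):
    """Blend several warped reference textures into one output texture.

    textures: list of HxWx3 arrays already warped into the target view.
    weights:  one scalar weight per texture (hypothetical; e.g. based on
              how close each reference viewpoint is to the novel view).
    Weights are normalized so the result is a convex combination.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    out = np.zeros_like(textures[0], dtype=float)
    for tex, wi in zip(textures, w):
        out += wi * tex
    return out
```

With equal weights this reduces to a plain average; increasing one weight pulls the result toward that reference image.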

In contrast to previous light field rendering approaches, our
warping-based algorithm is able to fully exploit per-pixel depth
information in order to depth-correct the light field samples with
maximum accuracy.
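The depth correction described above amounts to reprojecting each light field sample through its per-pixel depth. The following is a sketch of that reprojection under a standard pinhole camera model, not the paper's implementation; the shared intrinsic matrix `K` and the relative pose `R`, `t` are assumed inputs.

```python
import numpy as np

def warp_with_depth(depth, K, R, t):
    """Reproject every pixel of a reference view into a target view
    using per-pixel depth (pinhole-model sketch of depth correction).

    depth: HxW array of depths in the reference camera.
    K:     3x3 intrinsic matrix, assumed shared by both views.
    R, t:  rotation (3x3) and translation (3,) from reference to target.
    Returns an HxWx2 array of target-image pixel coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Homogeneous pixel coordinates, one column per pixel (3 x N).
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D points in the reference camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the target camera frame and project.
    pts_t = R @ pts + t.reshape(3, 1)
    proj = K @ pts_t
    uv = proj[:2] / proj[2:3]
    return uv.T.reshape(h, w, 2)
```

With the identity pose (R = I, t = 0) every pixel maps back onto itself regardless of its depth, which is a quick sanity check for the geometry.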

The complexity of the proposed algorithm is independent of the
number of stored reference images and of the final screen
resolution. It incurs only small overhead and produces very few
visible artifacts. We demonstrate both the visual fidelity and the
performance of our method through various examples.