
Record


Released

Conference Paper

High-Quality Interactive Lumigraph Rendering Through Warping

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45390

Schirmacher, Hartmut
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons44602

Heidrich, Wolfgang
Computer Graphics, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45449

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Schirmacher, H., Heidrich, W., & Seidel, H.-P. (2000). High-Quality Interactive Lumigraph Rendering Through Warping. In S. Fels, & P. Poulin (Eds.), Proceedings of Graphics Interface 2000 (GI-00) (pp. 87-94). Toronto, Canada: Canadian Information Processing Society.


Citation link: http://hdl.handle.net/11858/00-001M-0000-000F-34C3-B
Abstract
We introduce an algorithm for high-quality, interactive light field rendering from only a small number of input images. The algorithm bridges the gap between image warping and interpolation from image databases, which represent the two major approaches in image-based rendering. By warping and blending only necessary parts of the Lumigraph images, we are able to generate a single view-corrected texture for every output frame at interactive rates. In contrast to previous light field rendering approaches, our warping-based algorithm is able to fully exploit per-pixel depth information in order to depth-correct the light field samples with maximum accuracy. The complexity of the proposed algorithm is independent of the number of stored reference images and of the final screen resolution. It performs with only small overhead and very few visible artifacts. We demonstrate the visual fidelity as well as the performance of our method through various examples.
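The abstract's core idea is depth-corrected warping: each light field sample is reprojected into the desired view using its per-pixel depth, rather than interpolated at a fixed focal plane. As a rough, minimal sketch of that idea (not the paper's actual algorithm, which warps and blends only the necessary parts of the Lumigraph images into a view-corrected texture), here is a hypothetical forward-warping routine for one reference image under a simple pinhole camera model; all names and parameters are illustrative assumptions:

```python
import numpy as np

def warp_reference_view(colors, depths, K_ref, pose_ref, K_tgt, pose_tgt, out_shape):
    """Forward-warp one reference image into a target view using
    per-pixel depth (illustrative stand-in for depth-correcting
    light field samples).

    colors:  (H, W, 3) reference image
    depths:  (H, W) depth along the reference camera's z axis
    K_*:     (3, 3) pinhole intrinsics
    pose_*:  (4, 4) camera-to-world transforms
    """
    H, W = depths.shape
    # Pixel grid -> homogeneous pixel coordinates, one column per pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Unproject to 3D in the reference camera frame, then into world space.
    rays = np.linalg.inv(K_ref) @ pix
    pts_cam = rays * depths.reshape(1, -1)          # scale each ray by its depth
    pts_world = pose_ref[:3, :3] @ pts_cam + pose_ref[:3, 3:4]

    # Transform into the target camera frame and project.
    world_to_tgt = np.linalg.inv(pose_tgt)
    pts_tgt = world_to_tgt[:3, :3] @ pts_world + world_to_tgt[:3, 3:4]
    proj = K_tgt @ pts_tgt
    z = proj[2]
    uv = np.round(proj[:2] / z).astype(int)         # nearest-pixel splat

    # Z-buffered splatting: keep the nearest sample per target pixel.
    out = np.zeros(out_shape + (3,))
    zbuf = np.full(out_shape, np.inf)
    valid = (z > 0) & (uv[0] >= 0) & (uv[0] < out_shape[1]) \
                    & (uv[1] >= 0) & (uv[1] < out_shape[0])
    src = colors.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        x, y = uv[0, i], uv[1, i]
        if z[i] < zbuf[y, x]:
            zbuf[y, x] = z[i]
            out[y, x] = src[i]
    return out
```

In the paper's setting, several nearby reference views would be warped this way and their contributions blended per pixel; this sketch handles only a single view with hard z-buffering, which is where holes and blending weights would come into play in a full renderer.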