
Released

Journal Article

Manipulating Refractive and Reflective Binocular Disparity

MPS-Authors

/persons/resource/persons101646
Dabala, Lukasz
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons79305
Kellnhofer, Petr
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons45298
Ritschel, Tobias
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons45598
Templin, Krzysztof
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons45095
Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society

/persons/resource/persons45449
Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

Citation

Dabala, L., Kellnhofer, P., Ritschel, T., Didyk, P., Templin, K., Rokita, P., et al. (2014). Manipulating Refractive and Reflective Binocular Disparity. Computer Graphics Forum, 33(2), 53-62. doi:10.1111/cgf.12290.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0019-EEF9-6
Abstract
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account only for the binocular disparity of surfaces that are diffuse and opaque. However, combinations of transparent and specular materials are common in both real and virtual worlds, and they pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Multiple stereo interpretations also become possible, e.g., for glass that both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes, where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization that finds the best per-pixel camera parameters, ensuring that all disparities can be easily fused by a human observer. A preliminary perceptual study indicates that our approach combines comfortable viewing with a realistic depiction of typical specular scenes.
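
The core idea of the abstract, keeping every candidate disparity a viewer might perceive within a range that can be comfortably fused, can be illustrated with a toy sketch. The snippet below is not the paper's per-pixel camera-parameter optimization; it merely rescales a stack of hypothetical per-layer disparity maps (e.g., one for the reflected and one for the refracted image of a glass surface) so that all values fall inside an assumed comfort interval. The function name, the comfort limits, and the single global scale factor are illustrative assumptions, not part of the published method.

```python
# Illustrative sketch only; NOT the optimization described in the paper,
# which solves for spatially varying per-pixel camera parameters.
# Here we simply rescale a stack of hypothetical per-layer disparity maps
# (e.g., reflected and refracted layers of a glass surface) so that every
# candidate disparity falls inside an assumed "comfort" interval.
import numpy as np

def compress_disparities(layer_disparities, comfort_min=-30.0, comfort_max=30.0):
    """Rescale all per-layer disparities into [comfort_min, comfort_max].

    layer_disparities: float array of shape (L, H, W), one disparity map
    (in pixels) per layer. The comfort limits are placeholder values.
    """
    d_min = float(layer_disparities.min())
    d_max = float(layer_disparities.max())
    if comfort_min <= d_min and d_max <= comfort_max:
        return layer_disparities  # already fusible, nothing to do
    # One global scale about zero disparity (the screen plane).
    scale = min(comfort_max / max(d_max, 1e-6),
                comfort_min / min(d_min, -1e-6))
    return layer_disparities * scale

# Toy usage: two layers (reflection and refraction) for a 4x4 image.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    disparities = rng.uniform(-80.0, 80.0, size=(2, 4, 4))
    safe = compress_disparities(disparities)
    print(safe.min(), safe.max())  # both within [-30, 30]
```

A uniform rescale like this flattens perceived depth everywhere, which is exactly why the paper instead optimizes camera parameters per pixel; the sketch only shows the constraint being enforced, not how the method preserves local depth impression.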