
Record


Released

Poster

Image library for unsupervised learning of depth from stereo

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84056

Lies, J.-P.
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83805

Bethge, M.
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

Lies, J.-P., & Bethge, M. (2008). Image library for unsupervised learning of depth from stereo. Poster presented at Bernstein Symposium 2008, München, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-C6DB-D
Abstract
The visual system is able to extract depth information from the disparity between the two images on the retinae. Any system that makes use of disparity information must identify corresponding points in the two images. This correspondence problem constitutes a principal difficulty of depth from stereo, and many questions remain open about how the visual system solves it. In this work, we seek to understand how depth inference can emerge from unsupervised learning of statistical regularities in binocular images. In a first step, we acquire a database of training data by using virtual 3D scenes that are rendered into stereo images from two cameras positioned like the eyes. This provides us with an extensive repository of stereo images along with precise depth and disparity maps. In the future, we will use these data as ground truth for a quantitative analysis and comparison of different models of depth inference.
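For a rectified stereo pair like the one described above, the ground-truth disparity map follows directly from the rendered depth map via disparity = focal_length · baseline / depth. The sketch below illustrates this relation; the function name, the baseline, and the focal length are illustrative assumptions, not values from the poster.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a ground-truth depth map (meters) to a disparity map (pixels)
    for a rectified stereo pair: disparity = focal * baseline / depth.
    Pixels with non-positive depth are marked invalid (infinite disparity)."""
    depth = np.asarray(depth_m, dtype=float)
    disparity = np.full_like(depth, np.inf)
    valid = depth > 0
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

# Illustrative example: a human-like inter-camera baseline of ~6.5 cm
# and an assumed focal length of 500 px.
depth = np.array([[1.0, 2.0],
                  [0.5, 4.0]])  # meters
disp = depth_to_disparity(depth, focal_px=500.0, baseline_m=0.065)
```

Note the inverse relation: nearby points (0.5 m) produce large disparities, while distant points (4 m) produce small ones, which is why disparity is such a strong depth cue at close range.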