
Item


Released

Conference Paper

Face Models from Noisy 3D Cameras

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83829

Breidt,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83871

Curio,  C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Locator
There are no locators available
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Breidt, M., Bülthoff, H., & Curio, C. (2010). Face Models from Noisy 3D Cameras. In 3rd ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2010) (pp. 1-2). New York, NY, USA: ACM Press.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BD2C-B
Abstract
Affordable 3D vision is just about to enter the mass market for consumer products such as video game consoles or TV sets. Having depth information in this context is beneficial for segmentation as well as for gaining robustness against illumination effects, both of which are hard problems when dealing with color camera data in typical living-room situations. Several techniques compute 3D (or rather 2.5D) depth information from camera data, such as real-time stereo, time-of-flight (TOF), or real-time structured light, but all produce noisy depth data at fairly low resolutions. Not surprisingly, most applications are currently limited to basic gesture recognition using the full body. In particular, TOF cameras are a relatively new and promising technology for compact, simple and fast 2.5D depth measurements. Because they measure the flight time of infrared light as it bounces off the subject, these devices have comparatively low image resolution (176 x 144 to 320 x 240 pixels) and a high level of noise in the raw data.
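
The two technical points in the abstract, the time-of-flight measurement principle and the noise in the raw 2.5D depth maps, can be illustrated with a short sketch. This is not the authors' method from the paper; it is a minimal Python/NumPy illustration in which the camera intrinsics, the noise level, and the frame count are hypothetical. It shows how a round-trip time becomes a depth value (d = c * t / 2), how a 2.5D depth map back-projects to 3D points with a pinhole model, and how simple temporal averaging over a few frames reduces per-pixel noise.

import numpy as np

# Illustrative sketch only: the intrinsics (fx, fy, cx, cy), the noise
# level, and the frame count are hypothetical, not taken from the paper.
W, H = 176, 144                     # low TOF resolution mentioned in the abstract
fx = fy = 160.0
cx, cy = W / 2.0, H / 2.0
C = 299_792_458.0                   # speed of light, m/s

def tof_depth(round_trip_time_s):
    # Time-of-flight principle: depth is half the round-trip distance, d = c * t / 2.
    return C * round_trip_time_s / 2.0

def backproject(depth):
    # Pinhole back-projection of a 2.5D depth map (H x W, metres) into H x W x 3 points.
    v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)

print("depth for a ~6.67 ns round trip:", tof_depth(6.67e-9), "m")  # about 1 m

# Simulate the noisy raw data the abstract describes: a flat target at 1 m
# plus per-frame sensor noise, suppressed by a simple temporal average.
rng = np.random.default_rng(0)
true_depth = np.full((H, W), 1.0)
frames = [true_depth + rng.normal(0.0, 0.02, (H, W)) for _ in range(10)]
smoothed = np.mean(frames, axis=0)
points = backproject(smoothed)

print("per-frame depth noise (std, m):", np.std(frames[0] - true_depth))
print("after 10-frame averaging (std, m):", np.std(smoothed - true_depth))

Averaging N independent frames reduces the per-pixel standard deviation by roughly a factor of sqrt(N), which is why even this crude temporal smoothing makes the simulated raw data noticeably cleaner; it is only a stand-in for the more involved processing a real face-modelling pipeline would need.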