Deformation of perceived shape with multiple illumination sources


Tarr, M. J.
Max Planck Institute for Biological Cybernetics, Max Planck Society

Di Luca, M.
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society


Cite as:
Tarr, M., Di Luca, M., & Zosh, W. (2005). Deformation of perceived shape with multiple illumination sources. Poster presented at the Fifth Annual Meeting of the Vision Sciences Society (VSS 2005), Sarasota, FL, USA.

Abstract:
The human visual system readily interprets patterns of shading and attributes them to 3D surfaces. However, perceived shape is rarely veridical. Here we introduce a new effect in which extreme deformations of shape are perceived relative to ground truth. Specifically, convex, specular surfaces illuminated by multiple point-light sources are interpreted in a manner more consistent with a single light source illuminating a quite different shape. We hypothesize that the visual system cannot correctly derive the shape of objects under multiple illuminants because of an overriding single light-source assumption. This assumption can, however, be disregarded when there is sufficient evidence against it. For example, other cues to shape, such as shadows or stereo disparity, may provide enough information to support more accurate shape perception regardless of inferences based on this assumption (although this does not mean that observers now interpret the image as arising from multiple light sources). On the other hand, even the presence of boundary contours may not suffice for a "correct" interpretation of the image. Along with psychophysical evidence that observers interpret multiple light-source images as the product of a single source, we developed an image-matching method that produces an image of a shape plus a single illuminant that is nearly indistinguishable from the original image with multiple light sources. The method locally adjusts surface slant to minimize the difference between the new image and the target, and it effectively predicts the perceived shape of multiply illuminated convex surfaces. In summary, observers appear to apply relatively simple assumptions when deriving shape percepts from shading, and these assumptions can be captured in an image-matching model that accurately predicts observer performance. Thus, our ability to derive accurate models of the lighting in a scene may be severely restricted.
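The abstract does not give the authors' implementation, but the core idea — locally adjusting slant so a single-light rendering matches a multiple-light target image — can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's method: it uses Lambertian (not specular) shading, directional lights, and per-pixel gradient descent on surface gradients (p, q); the surface shape, light directions, and step sizes are all illustrative.

```python
import numpy as np

def normals_from_gradients(p, q):
    """Unit surface normals from local slant components p = dz/dx, q = dz/dy."""
    n = np.stack([-p, -q, np.ones_like(p)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shade(p, q, lights):
    """Lambertian image of the surface under one or more directional lights."""
    n = normals_from_gradients(p, q)
    img = np.zeros_like(p)
    for L in lights:
        L = np.asarray(L, dtype=float)
        img += np.clip(n @ (L / np.linalg.norm(L)), 0.0, None)
    return img / len(lights)

def fit_slant(target, light, steps=300, lr=0.2, eps=1e-4):
    """Locally adjust slant (p, q) by per-pixel finite-difference gradient
    descent on the squared difference between the single-light image and
    the target (illustrative stand-in for the image-matching procedure)."""
    p = np.zeros_like(target)
    q = np.zeros_like(target)
    err = lambda pp, qq: (shade(pp, qq, [light]) - target) ** 2
    for _ in range(steps):
        dp = (err(p + eps, q) - err(p - eps, q)) / (2 * eps)
        dq = (err(p, q + eps) - err(p, q - eps)) / (2 * eps)
        p -= lr * dp
        q -= lr * dq
    return p, q

# Target: a convex paraboloid bump rendered under TWO light sources.
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
p_true, q_true = -x, -y  # slants of z = -(x^2 + y^2) / 2
target = shade(p_true, q_true, [[0.4, 0.2, 1.0], [-0.4, 0.2, 1.0]])

# Recover a (generally different) shape whose image under ONE light
# is nearly indistinguishable from the multiple-light target.
single = [0.0, 0.2, 1.0]
p_fit, q_fit = fit_slant(target, single)
residual = np.mean((shade(p_fit, q_fit, [single]) - target) ** 2)
```

The recovered gradients (p_fit, q_fit) differ from the true bump's slants even though the rendered images nearly match — mirroring the abstract's point that a single-light interpretation of a multiply illuminated surface implies a deformed shape.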