Abstract:
Computational, or information-processing, theories of vision describe object recognition in terms of a comparison between the input image and a set of stored models that represent known objects. The nature of these representations is reflected in the performance of the visual system and may be studied experimentally, by presenting subjects with computer graphics simulations of three-dimensional objects (with precisely controlled shape cues) and analyzing the ensuing patterns of response time and error rate. We discuss a series of psychophysical experiments that explore different aspects of the problem of subordinate-level object recognition and representation in human vision. Contrary to the paradigmatic view, which holds that the representations are three-dimensional and object-centered, the results consistently support the notion of view-specific representations that include at most partial depth information. In simulated experiments involving the same stimuli shown to the human subjects, computational models built around two-dimensional multiple-view representations replicated the psychophysical results concerning the observed pattern of generalization errors. We argue that extensions of the multiple-view theory based on the notion of a hierarchy of spatial and nonspatial features could lead to a unification of theoretical accounts of a wide range of phenomena in human object recognition.