
Record


Released

Meeting Abstract

Metamers of the ventral stream revisited

MPG Authors
/persons/resource/persons83805

Bethge, M
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84314

Wichmann, F
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available.
Citation

Wallis, T., Bethge, M., & Wichmann, F. (2015). Metamers of the ventral stream revisited.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002A-44E5-8
Abstract
Peripheral vision has been characterised as a lossy representation: information present in the periphery is discarded to a greater degree than in the fovea. What information is lost and what is retained? Freeman and Simoncelli (2011) recently revived the concept of metamers (physically different stimuli that look the same) as a way to test this question. Metamerism is a useful criterion, but several details must be refined. First, their paper assessed metamerism using a task with a significant working memory component (ABX); we use a purely spatial discrimination task to probe perceptual encoding. Second, a strong test of any hypothesised representation is the extent to which it is metameric for a real scene. Several subsequent studies have misunderstood this to be the result of that paper, but Freeman and Simoncelli only compared synthetic stimuli to each other: pairs of stimuli were synthesised from natural images such that they were physically different but equal under the model representation. Their experiment then assessed the scaling factor (the size of the spatial pooling region as a function of retinal eccentricity) required to make these two synthesised images indiscriminable from one another, finding that these scaling factors approximated V2 receptive field sizes. We find that a scaling factor smaller than that of V2 receptive fields is required to make the synthesised images metameric for natural scenes (which are also equal under the model). We further show that this required scaling varies across images and is modified by including the spatial context of the target patches. While this particular model therefore fails to capture some perceptually relevant information, we believe that testing specific models against the criterion that they should discard as much information as possible while remaining metameric is a useful way to understand perceptual representations psychophysically.
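
To make the scaling-factor concept in the abstract concrete, the short Python sketch below illustrates the common linear assumption that the diameter of a spatial pooling region grows in proportion to retinal eccentricity, d(e) = s · e. This is an illustration only, not code or parameter values from the study; the function name, scaling factors, and eccentricities are hypothetical.

# Illustrative sketch (not from the study): under a linear scaling assumption,
# a pooling region centred at eccentricity e (degrees of visual angle) has
# diameter d(e) = s * e, where s is the model's scaling factor. A larger s
# pools over larger regions and so discards more image information.

def pooling_diameter(eccentricity_deg: float, scaling: float) -> float:
    """Return the pooling-region diameter (degrees) at a given eccentricity."""
    return scaling * eccentricity_deg

if __name__ == "__main__":
    # Hypothetical scaling factors and eccentricities, for comparison only.
    for s in (0.25, 0.5):
        diameters = [round(pooling_diameter(e, s), 2) for e in (2.0, 5.0, 10.0)]
        print(f"scaling factor {s}: pooling diameters at 2, 5, 10 deg = {diameters}")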