Physiologically Plausible Neuronal Model for Prototype-Referenced Encoding of Faces

Sigala A., G., Leopold, D., Wallraven, C., & Giese, M. (2004). Physiologically Plausible Neuronal Model for Prototype-Referenced Encoding of Faces. Poster presented at the 7th Tübingen Perception Conference (TWK 2004), Tübingen, Germany.

Creators

Creators:
Sigala A., GR1, Author
Leopold, D1, Author
Wallraven, C2, Author
Giese, MA3, Author
Affiliations:
1Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497798              
2Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              
3Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497794              

Content

Keywords: -
Abstract: Conceptual models of face recognition have assumed that faces are encoded as points in an abstract face space relative to an average face, or face prototype (e.g. [1]). So far it has been largely unclear how such a prototype-referenced encoding of faces could be implemented with real neurons. Recent electrophysiological evidence seems to support the relevance of prototype-referenced encoding: neurons in the inferotemporal cortex of macaques that have been trained with human faces tend to show monotonic tuning with the caricature level of the stimuli [2]. We present a neural model that accounts for these new electrophysiological results. The hierarchical model consists of multiple layers of neural detectors modeling properties of neurons in the ventral visual processing stream. The first layer models simple cells using Gabor filters with physiologically realistic parameters. A second layer combines the responses of those Gabor filters that carry significant information about the training stimuli into more complex features. The complex features in the model are based on the principal components of the Gabor responses, which could be extracted using simple Hebbian-like learning rules. The highest layer of the hierarchy models neurons in area IT. The responses of these neural detectors increase monotonically with the distance between the input feature vector from the previous layer and the average feature vector over all training faces. In addition, neural detectors on the highest level of the hierarchy show broad tuning with respect to the direction of the difference vector between the input feature vector and this average vector. The model was tested with gray-level images that were generated using a morphable 3D face model [3]. The model was trained with 98 randomly chosen faces from a database of 200 faces. It was tested with caricatures and anti-caricatures of four selected faces. In addition, we tested lateral caricatures of the faces, which lie on curves in face space that connect the four selected example faces. Exactly the same stimuli had been used in the electrophysiological experiments [2]. After training, a significant number of the neural units on the highest level of the model show monotonic tuning with the caricature level of the faces and moderate tuning with respect to facial identity, consistent with the electrophysiological results. The model provides a concrete, physiologically plausible neural implementation of face spaces. Future work will explore its computational properties and coding efficiency in comparison with classical neural models for face recognition.
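
The top-level tuning described in the abstract (responses growing monotonically with the distance from the average face, combined with broad tuning to the direction of the difference vector) can be sketched in a few lines. The code below is not the authors' implementation; the feature vectors standing in for the PCA-compressed Gabor responses, the cosine-shaped direction tuning, and all parameter names are assumptions chosen only to make the prototype-referenced encoding concrete.

import numpy as np

def unit_response(x, prototype, preferred_direction, gain=1.0, tuning_width=2.0):
    # Hypothetical top-level detector; x stands in for the feature vector
    # delivered by the intermediate (PCA-of-Gabor) layer.
    diff = x - prototype
    distance = np.linalg.norm(diff)           # "caricature level" of the input
    if distance == 0.0:
        return 0.0                            # the average face evokes no response
    direction = diff / distance
    # Broad, cosine-shaped tuning to the direction of the difference vector.
    cos_sim = float(np.dot(direction, preferred_direction))
    direction_tuning = ((1.0 + cos_sim) / 2.0) ** tuning_width
    # Response grows monotonically with the distance from the prototype.
    return gain * distance * direction_tuning

# Example: responses increase monotonically with caricature level along one identity.
rng = np.random.default_rng(0)
prototype = rng.normal(size=50)               # average face in feature space
identity_dir = rng.normal(size=50)
identity_dir /= np.linalg.norm(identity_dir)
for level in (0.5, 1.0, 1.5):                 # anti-caricature, veridical, caricature
    face = prototype + level * identity_dir
    print(level, unit_response(face, prototype, identity_dir))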

Details

Language(s):
Date: 2004-02
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: URI: http://www.twk.tuebingen.mpg.de/twk04/index.php
BibTeX citekey: 5547
Type of degree: -

Event

Title: 7th Tübingen Perception Conference (TWK 2004)
Venue: Tübingen, Germany
Start/End date: -
