  Classifying faces by sex is more accurate with 3D shape information than with texture

O'Toole, A., Vetter, T., Troje, N., & Bülthoff, H. (1996). Classifying faces by sex is more accurate with 3D shape information than with texture. Poster presented at Annual Meeting of the Association for Research in Vision and Ophthalmology 1996, Fort Lauderdale, FL, USA.


Creators

Creators:
O'Toole, AJ1, Author
Vetter, T1, Author
Troje, NF1, Author
Bülthoff, HH1, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Keywords: -
Abstract: Purpose: We compared the quality of information available in 3D surface models versus texture maps for classifying human faces by sex. Methods: 3D surface models and texture maps from laser scans of 130 human heads (65 male, 65 female) were analyzed with separate principal components analyses (PCAs). Individual principal components (PCs) from the 3D head data characterized complex structural differences between male and female heads. Likewise, individual PCs in the texture analysis contrasted characteristically male vs. female texture patterns (e.g., presence/absence of facial hair shadowing). More formally, representing faces with only their projection coefficients onto the PCs, and varying the subspace from 1 to 50 dimensions, we trained a series of perceptrons to predict the sex of the faces using either the 3D or texture data. A "leave-one-out" technique was applied to measure the generalizability of the perceptron's sex predictions. Results: While very good sex generalization performance was obtained for both representations, even with very low-dimensional subspaces (e.g., 76.1% correct with only one 3D projection coefficient), the 3D data supported more accurate sex classification across nearly the entire range of subspaces tested. For texture, 93.8% correct sex generalization was achieved with a minimum subspace of 20 projection coefficients. For the 3D data, 96.9% correct generalization was achieved with 17 projection coefficients. Conclusions: These data highlight the importance of considering the kinds of information available in different face representations with respect to the task demands.
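
The pipeline described in the abstract (PCA of the face data, faces represented by their projection coefficients onto the leading PCs, a perceptron trained to predict sex, and leave-one-out evaluation) can be sketched roughly as follows. This is a minimal illustration only: the synthetic stand-in data, array shapes, and the use of scikit-learn are assumptions made for the sketch, not a reproduction of the original analysis.

# Hypothetical sketch of a PCA + perceptron + leave-one-out pipeline.
# The data are random stand-ins for the 130 laser-scanned heads (65 male,
# 65 female); shapes and preprocessing are assumed for illustration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Perceptron
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 2000))            # one flattened feature vector per head
y = np.array([0] * 65 + [1] * 65)           # 0 = female, 1 = male (arbitrary coding)

def loo_sex_classification(X, y, n_components):
    """Leave-one-out accuracy of a perceptron trained on the first
    n_components PCA projection coefficients."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        # Fit the PCA subspace on the training faces only, then project
        # both the training faces and the held-out face onto it.
        pca = PCA(n_components=n_components)
        Z_train = pca.fit_transform(X[train_idx])
        Z_test = pca.transform(X[test_idx])

        clf = Perceptron(max_iter=1000, tol=1e-3)
        clf.fit(Z_train, y[train_idx])
        correct += int(clf.predict(Z_test)[0] == y[test_idx][0])
    return correct / len(y)

# Vary the subspace dimensionality, as in the abstract (1 to 50 dimensions).
for k in (1, 17, 20, 50):
    print(f"{k:2d} projection coefficients: {loo_sex_classification(X, y, k):.3f}")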

Details

Language(s):
Date: 1996-04
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: BibTex Citekey: 583
Degree: -

Event

Title: Annual Meeting of the Association for Research in Vision and Ophthalmology 1996
Place of event: Fort Lauderdale, FL, USA
Start/End date: -
