
Record

  A Computational Mid-Level Vision Approach For Shape-Specific Saliency Detection

Curio, C., & Engel, D. (2010). A Computational Mid-Level Vision Approach For Shape-Specific Saliency Detection. Poster presented at 10th Annual Meeting of the Vision Sciences Society (VSS 2010), Naples, FL, USA.

Creators

Creator:
Curio, C.¹, Author
Engel, D.¹, Author
Affiliations:
¹Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Keywords: -
Abstract: We present a novel computational approach to visual saliency detection in dynamic natural scenes based on shape-centered image features. Mid-level features, such as medial features, have been recognized as important entities both in human object recognition and in computational vision systems [Tarr & Buelthoff 1998, Kimia 2003]. [Kienzle et al. 2009] have shown that image-driven gaze predictors can be learned from fixations recorded during free viewing of static natural images and that they result in center-surround receptive fields.
Method: Our novel shape-centered vision framework provides a measure of visual saliency and is learning-free. It is based on the estimation of singularities of long-ranging gradient vector flow (GVF) fields, which were originally developed for the alignment of image contours [Xu & Prince 1998]. The GVF uses an optimization scheme that guarantees preservation of gradients at contours and, simultaneously, smoothness of the flow field. These properties are similar to filling-in processes in the human brain. Our method reveals the properties of medial-feature shape transforms and provides a mechanism to detect shape-specific information, local scale, and temporal change of scale in clutter. The approach generates a graph which encodes the shape across a scale-space for each image.
Results: We have made medial-feature transforms work in cluttered environments and have demonstrated temporal stability, thus providing a mechanism to track shape over time. The approach can be used to model eye-tracking data in dynamic scenes. A fast implementation will provide a useful tool for predicting shape-specific saliency at interactive frame rates.
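
The abstract builds on the gradient vector flow of [Xu & Prince 1998]. The sketch below is a minimal NumPy rendering of that classic GVF diffusion, included only to illustrate the mechanism the abstract refers to; the function and parameter names (gradient_vector_flow, flow_sinks, mu, n_iter, dt) are illustrative assumptions, and the poster's singularity detection, scale-space graph, and temporal tracking are not reproduced here.

import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200, dt=0.1):
    """Sketch of Xu & Prince (1998) GVF diffusion for a 2-D edge map.

    edge_map is assumed to be normalised to [0, 1], e.g. a gradient-
    magnitude image. Returns the flow field (u, v).
    """
    f = edge_map.astype(float)
    fy, fx = np.gradient(f)            # image gradient of the edge map
    b = fx ** 2 + fy ** 2              # data-fidelity weight |grad f|^2
    u, v = fx.copy(), fy.copy()        # initialise field with the gradient
    for _ in range(n_iter):
        # 5-point Laplacian (periodic borders, kept simple for brevity)
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
        # Gradient descent on the GVF energy: the smoothness term spreads
        # the flow into homogeneous regions, while the fidelity term keeps
        # the field equal to the image gradient near strong edges.
        u = u + dt * (mu * lap_u - b * (u - fx))
        v = v + dt * (mu * lap_v - b * (v - fy))
    return u, v

def flow_sinks(u, v):
    """Negative divergence of the flow: high values mark flow convergence,
    i.e. candidate shape-centered (medial-like) locations."""
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
    return -div

Applied frame by frame to a gradient-magnitude edge map, the peaks of flow_sinks give a rough, learning-free stand-in for the shape-centered saliency measure described in the abstract, not the authors' actual implementation.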

Details

Language(s):
Date: 2010-05
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: URI: http://www.journalofvision.org/content/10/7/1160.abstract
DOI: 10.1167/10.7.1160
BibTex Citekey: 6581
Degree: -

Event

Title: 10th Annual Meeting of the Vision Sciences Society (VSS 2010)
Venue: Naples, FL, USA
Start/End date: -
