  Learning Visual Representations for Perception-Action Systems

Piater, J., Jodogne, S., Detry, R., Kraft, R., Krüger, N., Kroemer, O., et al. (2011). Learning Visual Representations for Perception-Action Systems. The International Journal of Robotics Research, 30(3), 294-307. doi:10.1177/0278364910382464.


Basic data

Genre: Journal article

External references

Description: -
OA status:

Creators

Creators:
Piater, J., Author
Jodogne, S., Author
Detry, R., Author
Kraft, R., Author
Krüger, N., Author
Kroemer, O.1, 2, Author
Peters, J.1, 2, Author
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Keywords: -
Abstract: We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
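
To make the grasp-density idea from the abstract concrete, the following is a minimal, hypothetical sketch: a kernel density estimate over gripper poses built from trial-and-error grasp outcomes. It is simplified to 3-D gripper positions with isotropic Gaussian kernels (the paper's densities are defined over full 6-DOF gripper poses), and the GraspDensity class, its bandwidth parameter, and the toy success criterion are invented for illustration, not taken from the authors' implementation.

    import numpy as np

    # Hypothetical sketch of a "grasp density": a non-parametric (kernel)
    # estimate of grasp-success likelihood, here over 3-D gripper positions.
    class GraspDensity:
        def __init__(self, bandwidth=0.02):
            self.bandwidth = bandwidth   # kernel width in metres (assumed)
            self.successes = []          # positions of successful grasp trials

        def add_trial(self, position, succeeded):
            # Record one autonomously executed grasp trial.
            if succeeded:
                self.successes.append(np.asarray(position, dtype=float))

        def likelihood(self, position):
            # Relative grasp-success likelihood at a query position:
            # mean of Gaussian kernels centred on past successes.
            if not self.successes:
                return 0.0
            x = np.asarray(position, dtype=float)
            sq = np.array([np.sum((x - s) ** 2) for s in self.successes])
            return float(np.mean(np.exp(-0.5 * sq / self.bandwidth ** 2)))

    # Toy usage: learn from noisy trials around an (invented) graspable
    # point, then rank candidate gripper positions on a novel scene.
    rng = np.random.default_rng(0)
    target = np.array([0.0, 0.0, 0.1])
    density = GraspDensity()
    for _ in range(50):
        p = rng.normal(target, 0.03)
        density.add_trial(p, succeeded=np.linalg.norm(p - target) < 0.04)

    candidates = rng.normal(target, 0.05, size=(10, 3))
    best = max(candidates, key=density.likelihood)  # most promising grasp

Ranking candidate poses by such a likelihood mirrors how, in the paper, object detection in a novel scene directly yields grasping options.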

Details

Language(s):
Date: 2011-02
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: DOI: 10.1177/0278364910382464
BibTeX citekey: 6707
Degree: -


Source 1

Title: The International Journal of Robotics Research
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: Cambridge, MA : Sage Publications, Inc.
Pages: -
Volume / Issue: 30 (3)
Article number: -
Start / End page: 294 - 307
Identifier: ISSN: 0278-3649
CoNE: https://pure.mpg.de/cone/journals/resource/954925506289