Poster

A Representation of Complex Movement Sequences Based on Hierarchical Spatio-Temporal Correspondence for Imitation Learning in Robotics

MPG Authors
/persons/resource/persons83791

Bakir,  GH
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83919

Franz,  MO
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Ilg, W., Bakir, G., Franz, M., & Giese, M. (2003). A Representation of Complex Movement Sequences Based on Hierarchical Spatio-Temporal Correspondence for Imitation Learning in Robotics. Poster presented at 6. Tübinger Wahrnehmungskonferenz (TWK 2003), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-DD06-1
Abstract
Imitation learning of complex movements has become a popular topic in neuroscience as well as in robotics. A number of conceptual and practical problems are still unsolved. One example is the determination of the aspects of movements that are relevant for imitation. Problems concerning the movement representation are twofold: (1) the movement characteristics of observed movements have to be transferred from the perceptual level to the level of generated actions; (2) continuous spaces of movements with variable styles have to be approximated on the basis of a limited number of learned example sequences. One therefore has to use a representation with a high generalisation capability. We present methods for the representation of complex movement sequences that address these questions in the context of the imitation learning of writing movements using a robot arm with human-like geometry. For the transfer of complex movements from perception to action we exploit a learning-based method that represents complex action sequences by linear combinations of prototypical examples (Ilg and Giese, BMCV 2002). The method of hierarchical spatio-temporal morphable models (HSTMM) decomposes action sequences automatically into movement primitives. These primitives are modeled by linear combinations of a small number of learned example trajectories. The learned spatio-temporal models are suitable for the analysis and synthesis of long action sequences that consist of movement primitives with varying style parameters. The proposed method is illustrated by imitation learning of complex writing movements. Human trajectories were recorded using a commercial motion capture system (VICON). In the first step the recorded writing sequences are decomposed into movement primitives. These movement primitives can be analyzed and changed in style by defining linear combinations of prototypes with different linear weight combinations. Our system can imitate the writing movements of different actors, synthesize new writing styles, and can even exaggerate the writing movements of individual actors. Words and writing movements of the robot look very natural and closely match the natural styles. These preliminary results make the proposed method promising for further applications in learning-based robotics. In this poster we focus on the acquisition of the movement representation (identification and segmentation of movement primitives, generation of new writing styles by spatio-temporal morphing). The transfer of the generated writing movements to the robot under the given kinematic and dynamic constraints is discussed in Bakir et al. (this volume).
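
The linear-combination idea at the core of the morphable-model representation can be illustrated with a minimal Python sketch. It shows only a simplified spatial blend of prototype trajectories that are assumed to be already time-aligned to a common reference; the HSTMM method described in the abstract additionally morphs the temporal axis and segments sequences into primitives, neither of which is shown here. All function names, array shapes, and the random stand-in data are hypothetical illustrations, not the authors' implementation.

import numpy as np

def morph_primitive(reference, prototypes, weights):
    """Blend time-aligned prototype trajectories of one movement primitive.

    reference:  (T, D) array, reference trajectory of the primitive
    prototypes: list of K (T, D) arrays, already time-aligned to the reference
    weights:    length-K sequence of linear combination weights
    """
    weights = np.asarray(weights, dtype=float)
    # spatial displacement of each prototype relative to the reference
    displacements = np.stack([p - reference for p in prototypes])   # (K, T, D)
    # new trajectory: reference plus the weighted sum of displacements
    return reference + np.tensordot(weights, displacements, axes=1)

# Hypothetical usage with stand-in data: weights that sum to one interpolate
# between recorded styles, while weights above one exaggerate a single style,
# analogous to the caricatured writing movements mentioned in the abstract.
T, D = 200, 3                                            # samples per primitive, 3-D pen position
reference = np.zeros((T, D))
prototypes = [np.random.randn(T, D) for _ in range(2)]   # stand-ins for recorded writing styles
blended = morph_primitive(reference, prototypes, [0.3, 0.7])
exaggerated = morph_primitive(reference, prototypes, [1.5, 0.0])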