
Record


Released

Conference Paper

From Motor Learning to Interaction Learning in Robots

MPG Authors

Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Sigaud, O., & Peters, J. (2009). From Motor Learning to Interaction Learning in Robots. In 7ème Journées Nationales de la Recherche en Robotique (JNRR 2009) (pp. 189-195).


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C216-5
Abstract
The number of advanced robot systems has been increasing in recent years, yielding a large variety of versatile designs with many degrees of freedom. These robots have the potential to be applied to uncertain tasks outside well-structured industrial settings. However, the complexity of both systems and tasks is often beyond the reach of classical robot programming methods. As a result, a more autonomous solution for robot task acquisition is needed, in which robots adaptively adjust their behaviour to the encountered situations and required tasks.

Learning approaches offer one of the most appealing ways to achieve this goal. However, while learning approaches are of high importance for robotics, we cannot simply use off-the-shelf methods from the machine learning community, as these usually do not transfer to the domain of robotics due to excessive computational cost and poor scalability. Instead, domain-appropriate approaches are needed. We focus here on several core domains of robot learning. For accurate task execution, we need motor learning capabilities. For fast learning of motor tasks, imitation learning offers the most promising approach. Self-improvement requires reinforcement learning approaches that scale to the domain of complex robots. Finally, for efficient interaction of humans with robot systems, we will need a form of interaction learning. This contribution provides a general introduction to these issues and briefly presents the contributions of the related book chapters to the corresponding research topics.