
Released

Meeting Abstract

Reinforcement Learning by Relative Entropy Policy Search

MPG Authors
/persons/resource/persons84135
Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84097
Mülling, K
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83782
Altun, Y
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Peters, J., Mülling, K., & Altun, Y. (2010). Reinforcement Learning by Relative Entropy Policy Search. In 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010) (pp. 69).


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-BF56-A
Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information, and as a consequence the approach has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients, many of these problems may be addressed by constraining the information loss. In this book chapter, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. We also present a real-world application in which a robot employs REPS to learn how to return balls in a game of table tennis.
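The abstract describes the idea only at a high level: the updated policy should maximize expected return while its relative entropy (KL divergence) to the data-generating distribution stays below a bound ε, which limits the information lost per update. As a rough illustration of that constraint, the following is a minimal sketch of an episodic, REPS-style reweighting update in Python. It is not the authors' implementation; the toy objective, the sample sizes, and the choice ε = 0.5 are assumptions made purely for the example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reps_weights(returns, epsilon=0.5):
    """Compute REPS-style sample weights: solve the 1-D dual for the
    temperature eta, then reweight samples with exp(R / eta)."""
    R = np.asarray(returns, dtype=float)
    R = R - R.max()  # shift for numerical stability; the argmin and weights are unchanged

    def dual(eta):
        # Dual of: maximize sum_i p_i R_i  s.t.  KL(p || uniform) <= epsilon, sum_i p_i = 1
        return eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))

    eta = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").x
    w = np.exp(R / eta)
    return w / w.sum()

def weighted_gaussian_fit(params, weights):
    """Weighted maximum-likelihood fit of a Gaussian search distribution."""
    mean = weights @ params
    diff = params - mean
    cov = (weights[:, None] * diff).T @ diff + 1e-6 * np.eye(params.shape[1])
    return mean, cov

# Usage sketch: sample policy parameters, evaluate returns, reweight, refit.
rng = np.random.default_rng(0)
mean, cov = np.zeros(2), np.eye(2)
for it in range(20):
    theta = rng.multivariate_normal(mean, cov, size=100)
    returns = -np.sum((theta - np.array([1.0, -0.5])) ** 2, axis=1)  # toy objective (assumed)
    w = reps_weights(returns, epsilon=0.5)
    mean, cov = weighted_gaussian_fit(theta, w)
print(mean)  # should approach the toy optimum [1.0, -0.5]
```

The design choice the sketch tries to convey is that samples are reweighted with exp(R/η), where the temperature η comes from minimizing the dual: a small ε forces the new search distribution to stay close to the old one, which is exactly the bounded information loss the abstract refers to.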