
Record


Released

Conference Paper

Policy Gradients with Parameter-based Exploration for Control

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84135

Osendorfer, C., Rückstiess, T., Graves, A., Peters, J.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

External Resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Sehnke, F., Osendorfer, C., Rückstiess, T., Graves, A., Peters, J., & Schmidhuber, J. (2008). Policy Gradients with Parameter-based Exploration for Control. Artificial Neural Networks: ICANN 2008, 387-396.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-C751-4
Abstract
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust standing with a humanoid robot, we show that our method outperforms well-known algorithms from the fields of policy gradients, finite-difference methods, and population-based heuristics. We also provide a detailed analysis of the differences between our method and the other algorithms.
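
The abstract describes the gradient estimator only at a high level. The following is a minimal sketch of the general idea, sampling complete policy-parameter vectors from a factored Gaussian and forming a likelihood-ratio gradient for its mean and standard deviation; the rollout function, the toy quadratic objective, and the step sizes are illustrative assumptions, not the authors' implementation.

import numpy as np

def rollout(theta):
    # Hypothetical stand-in for an episodic return: a toy quadratic
    # objective, maximized when theta equals the target vector.
    target = np.ones_like(theta)
    return -np.sum((theta - target) ** 2)

def parameter_based_pg(dim=5, iters=300, pop=20, alpha=0.05):
    mu = np.zeros(dim)      # mean of the parameter distribution
    sigma = np.ones(dim)    # per-parameter exploration width
    for _ in range(iters):
        # Sample one fixed parameter vector per episode, so the
        # estimator avoids the per-step action noise of REINFORCE.
        thetas = mu + sigma * np.random.randn(pop, dim)
        returns = np.array([rollout(t) for t in thetas])
        adv = returns - returns.mean()   # baseline reduces variance
        diff = thetas - mu
        # Likelihood-ratio gradients for a factored Gaussian:
        #   d log p / d mu    = diff / sigma^2
        #   d log p / d sigma = (diff^2 - sigma^2) / sigma^3
        grad_mu = (adv[:, None] * diff / sigma**2).mean(axis=0)
        grad_sigma = (adv[:, None] * (diff**2 - sigma**2) / sigma**3).mean(axis=0)
        mu += alpha * grad_mu
        sigma = np.maximum(sigma + alpha * grad_sigma, 1e-3)
    return mu

print(parameter_based_pg())  # should approach a vector of ones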