Record

Released

Conference Paper

Optimal Reinforcement Learning for Gaussian Systems

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84387

Hennig, P.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Full texts (freely accessible)
No freely accessible full texts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Hennig, P. (2012). Optimal Reinforcement Learning for Gaussian Systems. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in Neural Information Processing Systems 24 (pp. 325-333). Red Hook, NY, USA: Curran.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-B882-2
Abstract
The exploration-exploitation trade-off is among the central challenges of reinforcement learning. The optimal Bayesian solution is intractable in general. This paper studies to what extent analytic statements about optimal learning are possible if all beliefs are Gaussian processes. A first-order approximation of learning of both loss and dynamics, for nonlinear, time-varying systems in continuous time and space, subject to a relatively weak restriction on the dynamics, is described by an infinite-dimensional partial differential equation. An approximate finite-dimensional projection gives an impression of how this result may be helpful.
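
For orientation, the "infinite-dimensional partial differential equation" can be contrasted with its classic finite-dimensional counterpart. The following is textbook material rather than the paper's own equation, and the symbols f, \ell, L, and v are generic placeholders: for controlled dynamics dx = f(x,u) dt + L dW with instantaneous loss \ell(x,u), the value function v(x,t) of the known-model control problem satisfies the stochastic Hamilton-Jacobi-Bellman equation

    -\partial_t v(x,t) = \min_u \Big[ \ell(x,u) + f(x,u)^\top \nabla_x v(x,t) + \tfrac{1}{2}\,\mathrm{tr}\big( L L^\top \nabla_x^2 v(x,t) \big) \Big].

Read against the abstract, the paper's result concerns the analogue of such an equation when the model is not known and the Gaussian process beliefs over loss and dynamics become part of the state, which is what renders the equation infinite-dimensional and makes the exploration-exploitation trade-off explicit.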