
Released

Conference Paper

Optimal Reinforcement Learning for Gaussian Systems

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84387

Hennig, P.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Hennig, P. (2012). Optimal Reinforcement Learning for Gaussian Systems. In Advances in Neural Information Processing Systems 24 (pp. 325-333). Red Hook, NY, USA: Curran.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-B882-2
Abstract
The exploration-exploitation trade-off is among the central challenges of reinforcement learning. The optimal Bayesian solution is intractable in general. This paper studies to what extent analytic statements about optimal learning are possible if all beliefs are Gaussian processes. A first-order approximation of learning of both loss and dynamics, for nonlinear, time-varying systems in continuous time and space, subject to a relatively weak restriction on the dynamics, is described by an infinite-dimensional partial differential equation. An approximate finite-dimensional projection gives an impression of how this result may be helpful.
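The abstract's premise of representing all beliefs as Gaussian processes can be illustrated with standard GP regression, which is not taken from the paper itself: a minimal sketch of maintaining a GP posterior over an unknown scalar dynamics function from observed transitions. The kernel, noise level, and data below are hypothetical choices for illustration only.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Posterior mean and pointwise variance of the GP belief at inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))       # training covariance + noise
    Ks = rbf(X, Xs)                              # train/test cross-covariance
    Kss = rbf(Xs, Xs)                            # test covariance
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Hypothetical observed transitions (x, f(x)) from an unknown system
X = np.array([-1.0, 0.0, 1.0])
y = np.sin(X)                     # stand-in observations of the dynamics
Xs = np.linspace(-2.0, 2.0, 5)    # query points
mean, var = gp_posterior(X, y, Xs)
# Posterior variance shrinks near observed inputs; this uncertainty is the
# quantity an exploration-exploitation trade-off weighs against expected loss.
```

The posterior variance at unvisited states is what makes the Bayesian treatment of exploration possible; the paper's contribution concerns how this belief evolves optimally in continuous time, which this static sketch does not capture.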