
Released

Conference Paper

Predictive Representations for Policy Gradient in POMDPs

MPG Authors
There are no MPG authors in this publication.
External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe.
Supplementary material (freely accessible)
There is no freely accessible supplementary material available.
Citation

Boularias, A., & Chaib-draa, B. (2009). Predictive Representations for Policy Gradient in POMDPs. In A. Danyluk, L. Bottou, & M. Littman (Eds.), ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning (pp. 65-72). New York, NY, USA: ACM Press.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C4A1-C
Abstract
We consider the problem of estimating the policy gradient in Partially Observable Markov Decision Processes (POMDPs) with a special class of policies that are based on Predictive State Representations (PSRs). We compare PSR policies to Finite-State Controllers (FSCs), which are considered a standard model for policy gradient methods in POMDPs. We present a general Actor-Critic algorithm for learning both FSCs and PSR policies. The critic computes a value function whose variables are the parameters of the policy; these parameters are gradually updated to maximize the value function. We show that the value function is polynomial for both FSCs and PSR policies, with a potentially smaller degree in the case of PSR policies. Therefore, the value function of a PSR policy can have fewer local optima than that of the equivalent FSC, and consequently the gradient algorithm is more likely to converge to a globally optimal solution.
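As a rough illustration of the abstract's last point, the sketch below runs plain gradient ascent on two assumed one-dimensional polynomial value surrogates. This is not the paper's Actor-Critic algorithm or its POMDP setting; the polynomials, learning rate, and step count are illustrative assumptions chosen only to show that a higher-degree value function can trap ascent in a local optimum that depends on the starting point, while a lower-degree one has a single optimum.

```python
# Minimal sketch (assumption, not the paper's algorithm): gradient
# ascent on a value function V(theta) that is polynomial in a scalar
# policy parameter theta.

def ascend(grad, theta, lr=0.01, steps=2000):
    """Gradient ascent: repeatedly move theta along dV/dtheta."""
    for _ in range(steps):
        theta += lr * grad(theta)
    return theta

# Degree-4 surrogate V(theta) = -(theta^2 - 1)^2 has two local maxima,
# at theta = -1 and theta = +1: which one ascent reaches depends on the
# starting point (analogous to a higher-degree FSC value function).
def grad_deg4(t):
    return -4.0 * t * (t * t - 1.0)

# Degree-2 surrogate V(theta) = -(theta - 1)^2 has a single maximum at
# theta = 1: ascent reaches the global optimum from any starting point
# (analogous to a lower-degree, PSR-like value function).
def grad_deg2(t):
    return -2.0 * (t - 1.0)
```

Starting `ascend(grad_deg4, ...)` at a positive versus a negative theta yields the two different local maxima, whereas `ascend(grad_deg2, ...)` converges to theta = 1 regardless of initialization.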