
Item Details


Released

Journal Article

Natural Actor-Critic

MPS-Authors
/persons/resource/persons84135

Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are currently no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Peters, J., & Schaal, S. (2008). Natural Actor-Critic. Neurocomputing, 71(7-9), 1180-1190. doi:10.1016/j.neucom.2007.11.026.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CA1F-2
Abstract
In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.
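
The update pattern described in the abstract can be illustrated with a short, self-contained sketch. The Python snippet below is a minimal episodic variant under simplifying assumptions (a one-dimensional linear-quadratic toy task, a Gaussian policy with a linear mean, and illustrative helper names such as rollout and enac_update that do not come from the paper); it is not the paper's exact algorithm, but it shows the general idea of the critic estimating the natural policy gradient by linear regression on compatible (log-policy-gradient) features, and the actor then stepping along that estimate.

import numpy as np

rng = np.random.default_rng(0)

def policy_mean(theta, s):
    # Linear state-feedback mean of the Gaussian policy (assumed form).
    return theta[0] * s

def grad_log_pi(theta, s, a, sigma=0.2):
    # Gradient of log N(a | theta[0]*s, sigma^2) with respect to theta;
    # these are the compatible features used by the critic's regression.
    return np.array([(a - policy_mean(theta, s)) * s / sigma**2])

def rollout(theta, T=20, sigma=0.2):
    # One episode on a toy linear system with quadratic cost (assumed task).
    s, G = 1.0, 0.0
    glp = np.zeros_like(theta)
    for _ in range(T):
        a = policy_mean(theta, s) + sigma * rng.standard_normal()
        glp += grad_log_pi(theta, s, a, sigma)   # accumulate log-policy gradients
        G += -(s**2 + 0.1 * a**2)                # quadratic cost as negative reward
        s = 0.9 * s + a                          # simple linear dynamics
    return glp, G

def enac_update(theta, n_episodes=50, alpha=0.05):
    # Critic: regress episode returns on the summed log-policy gradients plus a
    # constant baseline. The regression weights on the gradient features serve
    # as the natural-gradient estimate w; the actor then steps along w.
    Phi, R = [], []
    for _ in range(n_episodes):
        glp, G = rollout(theta)
        Phi.append(np.append(glp, 1.0))
        R.append(G)
    sol, *_ = np.linalg.lstsq(np.array(Phi), np.array(R), rcond=None)
    w = sol[:-1]                                 # natural policy gradient estimate
    return theta + alpha * w

theta = np.array([0.0])
for _ in range(30):
    theta = enac_update(theta)
print("learned feedback gain:", theta)

The sketch leans on the compatible-function-approximation idea mentioned in the abstract: because the critic's features are the policy's own score function, the least-squares weights directly yield a coordinate-independent (natural) gradient direction rather than a vanilla gradient that depends on the chosen parameterization.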