Item Details

Reinforcement Learning for Humanoid Robotics

Peters, J., Vijayakumar, S., & Schaal, S. (2003). Reinforcement Learning for Humanoid Robotics. In Humanoids 2003: Third IEEE-RAS International Conference on Humanoid Robots (pp. 1-20). Düsseldorf, Germany: VDI/VDE-GMA.


Basic Information

Genre: Conference Paper

Files

Related URLs

Description: -
OA-Status:

Creators

Creators:
Peters, J.1, Author
Vijayakumar, S., Author
Schaal, S.1, Author
Affiliations:
1 External Organizations, ou_persistent22

Content

Keywords: -
Abstract: Reinforcement learning offers one of the most general frameworks for taking traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, "vanilla" policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain of humanoid robotics, as they are problematic when used with function approximation. "Vanilla" policy gradient methods, on the other hand, have been successfully applied on real-world robots, including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensional continuous state-action systems.
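
The abstract contrasts "vanilla" policy gradients with the natural policy gradient, which pre-multiplies the ordinary gradient estimate by the inverse of the Fisher information matrix. The following is a minimal illustrative sketch, not the paper's Natural Actor-Critic: a hypothetical one-step task with a Gaussian policy and an assumed quadratic reward, showing an empirical-Fisher natural-gradient update.

```python
# Minimal sketch, not the paper's algorithm: natural policy-gradient ascent on a
# hypothetical one-step task with a Gaussian policy over a scalar action.
# The natural gradient is F^{-1} g, where g is the "vanilla" (REINFORCE-style)
# gradient estimate and F is the empirical Fisher information matrix.
import numpy as np

rng = np.random.default_rng(0)

def reward(a):
    # Assumed toy reward, peaked at a = 2.0 (not from the paper).
    return -(a - 2.0) ** 2

def rollout(theta, n=500):
    mean, log_std = theta
    std = np.exp(log_std)
    a = rng.normal(mean, std, size=n)
    # Score function of the Gaussian policy w.r.t. (mean, log_std).
    score = np.stack([(a - mean) / std ** 2,
                      ((a - mean) ** 2) / std ** 2 - 1.0], axis=1)
    return score, reward(a)

def natural_gradient(score, r, damping=1e-3):
    g = (score * (r - r.mean())[:, None]).mean(axis=0)   # vanilla gradient
    F = score.T @ score / len(r)                          # empirical Fisher
    return np.linalg.solve(F + damping * np.eye(len(g)), g)

theta = np.array([0.0, 0.0])                              # initial mean 0, std 1
for _ in range(300):
    score, r = rollout(theta)
    theta = theta + 0.1 * natural_gradient(score, r)      # gradient ascent

print("learned mean:", round(theta[0], 2))                # moves toward 2.0
```

Replacing the natural_gradient step with the plain estimate g in the update loop gives the "vanilla" policy-gradient baseline that the abstract compares against.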

Details

Language:
Dates: 2003-09
Publication Status: Published
Pages: -
Publishing info: -
Table of Contents: -
Review method: -
Identifiers (DOI, ISBN, etc.): BibTex Citekey: 5057
Degree: -

Event

Title: 3rd IEEE-RAS International Conference on Humanoid Robots (ICHR 2003)
Place of Event: Karlsruhe, Germany
Start/End Date: 2003-09-29 - 2003-09-30

Legal Case

Project information

Source 1

Title: Humanoids 2003: Third IEEE-RAS International Conference on Humanoid Robots
Genre: Conference Proceedings
Authors/Editors:
Affiliations:
Publisher, Place: Düsseldorf, Germany : VDI/VDE-GMA
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 20
Identifiers (ISBN, ISSN, DOI, etc.): ISBN: 3-00-012047-5