  Reinforcement Learning for Humanoid Robotics

Peters, J., Vijayakumar, S., & Schaal, S. (2003). Reinforcement Learning for Humanoid Robotics. In 3rd IEEE-RAS International Conference on Humanoid Robots (Humanoids2003) (pp. 1-20).

Creators

Peters, J.¹ ², Author
Vijayakumar, S., Author
Schaal, S., Author
Affiliations:
¹ Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497795)
² Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society (ou_1497647)

Content

Free keywords: -
Abstract: Reinforcement learning offers one of the most general frameworks for taking traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches to reinforcement learning in terms of their applicability to humanoid robotics. Methods can be coarsely classified into three categories: greedy methods, 'vanilla' policy gradient methods, and natural gradient methods. We argue that greedy methods are unlikely to scale into the domain of humanoid robotics, as they are problematic when used with function approximation. 'Vanilla' policy gradient methods, on the other hand, have been successfully applied to real-world robots, including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved by using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and in learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensional continuous state-action systems.
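
As a quick illustration of the distinction the abstract draws (this is not from the paper; it is a minimal sketch assuming a hypothetical two-armed bandit, a softmax policy, and arbitrarily chosen step-size and ridge constants): the 'vanilla' policy gradient follows an estimate of the gradient of the expected return J(theta), while the natural policy gradient premultiplies that estimate by the inverse Fisher information matrix F(theta) of the policy, making the update invariant to how the policy is parameterized.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative toy problem: 2-armed bandit with a softmax policy over logits theta.
    rewards = np.array([1.0, 0.0])   # arm 0 pays more
    theta = np.zeros(2)              # policy parameters
    alpha = 0.1                      # hypothetical step size, not from the paper

    def policy(theta):
        e = np.exp(theta - theta.max())
        return e / e.sum()

    for step in range(500):
        p = policy(theta)
        a = rng.choice(2, p=p)
        # REINFORCE ('vanilla') gradient estimate: r * grad log pi(a | theta),
        # where grad log pi(a | theta) = e_a - p for a softmax policy.
        grad_log = -p.copy()
        grad_log[a] += 1.0
        vanilla = rewards[a] * grad_log
        # Fisher information of the softmax policy: F = diag(p) - p p^T.
        # F is singular here, so we add a small ridge before solving;
        # the 1e-3 constant is an implementation convenience, not from the paper.
        F = np.diag(p) - np.outer(p, p) + 1e-3 * np.eye(2)
        natural = np.linalg.solve(F, vanilla)   # natural gradient direction
        theta += alpha * natural                # swap in `vanilla` to compare

    print(policy(theta))   # probability mass should concentrate on arm 0

Explicitly building and inverting F, as above, is only feasible for toy problems; as the abstract notes, the paper instead introduces the Natural Actor-Critic algorithm for estimating the natural gradient, which avoids forming the Fisher matrix directly.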

Details

Language(s): -
Dates: 2003-09
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: BibTeX Citekey: 5057
Degree: -

Event

Title: 3rd IEEE-RAS International Conference on Humanoid Robots (Humanoids2003)
Place of Event: Karlsruhe, Germany
Start-/End Date: -

Source 1

Title: 3rd IEEE-RAS International Conference on Humanoid Robots (Humanoids2003)
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 20
Identifier: -