
Item Details

  Towards Motor Skill Learning in Robotics

Peters, J. (2007). Towards Motor Skill Learning in Robotics. Talk presented at RSS 2008 Workshop: Interactive Robot Learning (IRL 2008). Zürich, Switzerland. 2007-06-28.


Basic Information

Resource type: Talk

Files


Creators

Creators:
Peters, J.1, 2, Author
Affiliations:
1Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795              
2Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794              

Content

Keywords: -
Abstract: Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulator or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands.
Learning parameterized motor primitives usually requires reward-related self-improvement, i.e., reinforcement learning. We propose a new, task-appropriate architecture, the Natural Actor-Critic. This algorithm is based on natural policy gradient updates for the actor, while the critic estimates the natural policy gradient. Empirical evaluations illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm.
For the proper execution of motion, we need to learn how to realize the behavior prescribed by the motor primitives in their respective task space through the generation of motor commands. This transformation corresponds to solving the classical problem of operational space control through machine learning techniques. Such robot control problems can be reformulated as immediate-reward reinforcement learning problems. We derive an EM-based reinforcement learning algorithm which reduces the problem of learning with immediate rewards to a reward-weighted regression problem. The resulting algorithm learns smoothly without dangerous jumps in solution space, and works well in application to complex, high degree-of-freedom robots.
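
As a reading aid for the abstract above, the actor update in a natural-gradient actor-critic can be sketched in generic form; this is not necessarily the exact variant presented in the talk. Here \theta are the policy parameters, \alpha a step size, F(\theta) the Fisher information matrix of the policy, J(\theta) the expected return, and w_k the critic's estimate of the natural gradient:

  \theta_{k+1} = \theta_k + \alpha\, w_k, \qquad w_k \approx F(\theta_k)^{-1}\, \nabla_\theta J(\theta_k)

In the Natural Actor-Critic family, the weights of a compatible function approximator coincide with the natural gradient, so the actor receives w_k directly from the critic and never has to form or invert F(\theta) explicitly.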
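Similarly, the reduction to reward-weighted regression mentioned in the last paragraph is commonly written as a weighted least-squares problem; the notation below is an illustrative sketch assuming a linear-in-parameters policy mean \theta^\top \phi(x), sampled states x_i, actions u_i, and transformed non-negative rewards r_i used as weights:

  \theta^{\mathrm{new}} = \arg\min_\theta \sum_i r_i \left( u_i - \theta^\top \phi(x_i) \right)^2 = \left( \Phi^\top R\, \Phi \right)^{-1} \Phi^\top R\, U, \qquad R = \mathrm{diag}(r_1, \dots, r_n)

Each EM iteration re-weights the observed state-action pairs by their rewards and refits the policy parameters in closed form, which is consistent with the abstract's point that the resulting updates stay smooth and avoid dangerous jumps in solution space.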

Details

Language:
Date: 2007-06
Publication status: Published online
Pages: -
Publishing info: -
Table of contents: -
Peer review: -
Identifiers (DOI, ISBN, etc.): BibTex reference ID: 5669
Degree: -

Related Event

Event name: RSS 2008 Workshop: Interactive Robot Learning (IRL 2008)
Place of event: Zürich, Switzerland
Start/End date: 2007-06-28
Invited talk: Invited

Legal Case


Project information


Publication 1

Title: Interactive Robot Learning - RSS 2008 Workshop
Type: Conference proceedings
Authors/Editors:
Affiliations:
Publisher, place: -
Pages: -
Volume / Issue: -
Sequence number: -
Start / End page: 3 - 4
Identifiers (ISBN, ISSN, DOI, etc.): -