  Recurrent neural networks from learning attractor dynamics

Schaal, S., & Peters, J. (2003). Recurrent neural networks from learning attractor dynamics. Talk presented at NIPS 2003 Workshop on RNNaissance: Recurrent Neural Networks. Whistler, BC, Canada. 2003-12-12.


Basic information

Resource type: Talk


Creators

Creators:
Schaal, S., Author
Peters, J.¹, Author
Affiliations:
¹ External Organizations

Content description

Keywords: -
Abstract: Many forms of recurrent neural networks can be understood in terms of the dynamical systems theory of difference equations or differential equations. Learning in such systems corresponds to adjusting internal parameters to obtain a desired time evolution of the network, which can usually be characterized in terms of point attractor dynamics, limit cycle dynamics, or, in rarer cases, strange attractor or chaotic dynamics. Finding a stable learning process that adjusts the open parameters of the network toward the desired attractor type and basin of attraction has remained a complex task, as the parameter trajectories during learning can lead the system through a variety of undesirable unstable behaviors, such that learning may never succeed.
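As a concrete instance of the point attractor dynamics mentioned above, the following is a minimal sketch (our illustration, not from the talk): a critically damped 2nd-order linear system converges to its goal from any initial state, regardless of the starting point.

```python
def simulate_point_attractor(y0, g, alpha=25.0, beta=6.25, dt=0.001, T=1.0):
    """Euler-integrate y'' = alpha * (beta * (g - y) - y'):
    a critically damped 2nd-order linear system (alpha = 4 * beta * ... chosen
    so that alpha**2 == 4 * alpha * beta) with a point attractor at the goal g."""
    y, yd = y0, 0.0
    for _ in range(int(T / dt)):
        ydd = alpha * (beta * (g - y) - yd)
        yd += ydd * dt
        y += yd * dt
    return y

final = simulate_point_attractor(y0=0.0, g=1.0)  # final state close to the goal 1.0
```

With these gains the system's eigenvalues are a repeated real root at -12.5, so the state settles at the goal without oscillation well within one second.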

In this presentation, we review a recently developed learning framework for a class of recurrent neural networks that employs a more structured network approach. We assume that the canonical system behavior is known a priori, e.g., that it is a point attractor or a limit cycle. With either supervised learning or reinforcement learning, it is possible to acquire the transformation from a simple representative of this canonical behavior (e.g., a 2nd-order linear point attractor, or a simple limit cycle oscillator) to the desired, highly complex attractor form. For supervised learning, one-shot learning based on locally weighted regression techniques is possible. For reinforcement learning, stochastic policy gradient techniques can be employed. In either case, the recurrent network learned by these methods inherits the stability properties of the simple dynamic system that underlies the nonlinear transformation, so that stability of the learning approach is not a problem. We demonstrate the success of this approach for learning various skills on a humanoid robot, including tasks that require incorporating additional sensory signals as coupling terms to modify the recurrent network evolution online.
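The structured approach described above can be sketched in a hedged, illustrative form, in the spirit of dynamic movement primitives (all function names, gains, and basis-function constants below are our own choices, not the authors' implementation): a decaying canonical phase drives a nonlinear forcing term, whose basis weights are fit in one shot by locally weighted regression against a demonstrated trajectory, and the forcing modulates a stable 2nd-order linear point attractor.

```python
import numpy as np

def canonical_phase(n_steps, dt, ax=4.0):
    # 1st-order canonical system x' = -ax * x with x(0) = 1:
    # a phase variable decaying monotonically from 1 toward 0
    x = np.empty(n_steps)
    x[0] = 1.0
    for t in range(1, n_steps):
        x[t] = x[t - 1] - ax * x[t - 1] * dt
    return x

def fit_forcing(x, f_target, centers, widths):
    # one-shot locally weighted regression: each Gaussian basis gets its own
    # weight w_i minimizing sum_t psi_i(x_t) * (f_target_t - w_i * x_t)**2
    psi = np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)  # (T, K)
    num = (psi * (x * f_target)[:, None]).sum(axis=0)
    den = (psi * (x ** 2)[:, None]).sum(axis=0) + 1e-10
    return num / den

def rollout(w, centers, widths, y0, g, n_steps, dt, alpha=25.0, beta=6.25):
    # linear point attractor plus learned forcing term; f is scaled by the
    # phase x, so it vanishes as x decays and the attractor's stability
    # is inherited by the combined system
    x = canonical_phase(n_steps, dt)
    psi = np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)
    f = (psi @ w) * x / (psi.sum(axis=1) + 1e-10)
    y, yd = y0, 0.0
    traj = np.empty(n_steps)
    for t in range(n_steps):
        ydd = alpha * (beta * (g - y) - yd) + f[t]
        yd += ydd * dt
        y += yd * dt
        traj[t] = y
    return traj

# demonstration: a minimum-jerk reach from 0 to 1 over one second
dt, n = 0.002, 500
tt = np.linspace(0.0, (n - 1) * dt, n)
s = tt / tt[-1]
y_demo = 10 * s**3 - 15 * s**4 + 6 * s**5
yd_demo = np.gradient(y_demo, tt)
ydd_demo = np.gradient(yd_demo, tt)
y0, g = y_demo[0], y_demo[-1]

# invert the attractor equation to get the forcing the demonstration implies
f_target = ydd_demo - 25.0 * (6.25 * (g - y_demo) - yd_demo)

x = canonical_phase(n, dt)
centers = np.exp(-4.0 * np.linspace(0.0, 1.0, 10))  # basis centers in phase space
widths = np.full(10, 50.0)
w = fit_forcing(x, f_target, centers, widths)
traj = rollout(w, centers, widths, y0, g, n, dt)  # traj ends near the goal g
```

Because the forcing term decays with the phase, the rollout always settles at the attractor's goal; only the transient shape is learned, which is why stability of learning is not an issue in this construction.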

Details

Language: -
Date: 2003-12
Publication status: Published online
Pages: -
Publishing info: -
Table of contents: -
Review method: -
Identifiers (DOI, ISBN, etc.): BibTex Citekey: SchaalP2003
Degree: -

Related event

Event name: NIPS 2003 Workshop on RNNaissance: Recurrent Neural Networks
Place of event: Whistler, BC, Canada
Start/end date: 2003-12-12
Invited: Invited
