Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning

Hachiya, H., Peters, J., & Sugiyama, M. (2011). Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning. Neural Computation, 23(11), 2798-2832. doi:10.1162/NECO_a_00199.

 Creators:
Hachiya, H.1, Author
Peters, J.2, Author
Sugiyama, M., Author
Affiliations:
1 Max Planck Society, ou_persistent13
2 Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society, ou_1497647

Content

Free keywords: MPI für Intelligente Systeme; Abt. Schölkopf;
 Abstract: Direct policy search is a promising reinforcement learning framework, in particular for controlling continuous, high-dimensional systems. Policy search often requires a large number of samples to obtain a stable policy update estimator, which is prohibitive when the sampling cost is high. In this letter, we extend an expectation-maximization-based policy search method so that previously collected samples can be efficiently reused. The usefulness of the proposed method, reward-weighted regression with sample reuse (R³), is demonstrated through robot learning experiments.
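
As a rough illustration of the idea summarized in the abstract, the Python sketch below performs EM-style reward-weighted regression updates for a linear-Gaussian policy and reuses a fixed batch of samples from an older policy via importance weights. It is a minimal reconstruction under stated assumptions, not the authors' R³ algorithm: all variable names, the toy data, and the ridge term are assumptions made for this sketch.

import numpy as np

def gauss_pdf(a, mean, sigma):
    # Density of N(mean, sigma^2) at a, evaluated elementwise.
    return np.exp(-0.5 * ((a - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def rwr_update(phi, actions, rewards, theta, theta_old, sigma, ridge=1e-6):
    # One reward-weighted regression (M-step) update with sample reuse.
    # phi: (n, d) state features; actions: (n,) actions drawn from the
    # old (sampling) policy; rewards: (n,) nonnegative returns used as
    # EM weights; theta / theta_old: parameters of the policy being
    # improved / of the policy that generated the samples.
    #
    # Importance weights correct for reusing off-policy samples:
    # w_i = pi_theta(a_i | s_i) / pi_theta_old(a_i | s_i).
    iw = gauss_pdf(actions, phi @ theta, sigma) \
       / gauss_pdf(actions, phi @ theta_old, sigma)
    w = rewards * iw
    # Reward- and importance-weighted least squares for the policy mean
    # (a small ridge term keeps the normal equations well conditioned).
    A = phi.T @ (phi * w[:, None]) + ridge * np.eye(phi.shape[1])
    b = phi.T @ (w * actions)
    return np.linalg.solve(A, b)

# Toy usage: iterate the update on one batch collected under theta_old,
# so no new rollouts are needed between iterations.
rng = np.random.default_rng(0)
n, d = 200, 3
phi = rng.normal(size=(n, d))
theta_old, sigma = np.zeros(d), 0.5
actions = phi @ theta_old + sigma * rng.normal(size=n)
rewards = np.exp(-0.5 * (actions - phi @ np.array([1.0, -0.5, 0.2])) ** 2)
theta = theta_old.copy()
for _ in range(5):
    theta = rwr_update(phi, actions, rewards, theta, theta_old, sigma)

Recomputing the weights from the current policy while reusing one stored batch is the kind of sample reuse the abstract describes; the published method is additionally concerned with keeping such importance-weighted policy update estimators stable.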

Details

Language(s): -
Dates: 2011-11-01
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: eDoc: 596124
Other: HachiyaPS2011
DOI: 10.1162/NECO_a_00199
Degree: -


Source 1

Title: Neural Computation
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: 34
Volume / Issue: 23 (11)
Sequence Number: -
Start / End Page: 2798 - 2832
Identifier: -