Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning

Hachiya, H., Peters, J., & Sugiyama, M. (2011). Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning. Neural Computation, 23(11), 2798-2832. doi:10.1162/NECO_a_00199.

Creators

Hachiya, H.¹, Author
Peters, J.¹,², Author
Sugiyama, M., Author
Affiliations:
1. Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society
2. Department Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society

Content

Free keywords: -
Abstract: Direct policy search is a promising reinforcement learning framework, in particular for controlling continuous, high-dimensional systems. Policy search often requires a large number of samples for obtaining a stable policy update estimator, and this is prohibitive when the sampling cost is expensive. In this letter, we extend an expectation-maximization-based policy search method so that previously collected samples can be efficiently reused. The usefulness of the proposed method, reward-weighted regression with sample reuse (R³), is demonstrated through robot learning experiments.
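
As a concrete illustration of the idea in the abstract, here is a minimal Python sketch of a reward-weighted regression update with importance-weighted sample reuse for a linear-Gaussian policy. It is not the paper's R³ algorithm: the plain importance ratio, the linear policy model, and all names (rwr_update_with_reuse, theta_behavior, etc.) are assumptions made for illustration; the paper develops a more refined weighting scheme within an EM-based framework.

```python
import numpy as np

def rwr_update_with_reuse(states, actions, rewards,
                          theta_cur, theta_behavior, sigma):
    """One importance-weighted reward-weighted regression (RWR) update.

    Hypothetical sketch: assumes a linear-Gaussian policy
    a ~ N(theta^T s, sigma^2) and nonnegative rewards (RWR treats
    rewards as weights in a weighted maximum-likelihood fit).
    """
    # Policy means under the current policy and under the (older)
    # behavior policy that actually generated the samples.
    mean_cur = states @ theta_cur
    mean_beh = states @ theta_behavior

    # Log-densities of the observed actions under both Gaussians
    # (shared sigma, so normalization constants cancel in the ratio).
    log_p_cur = -0.5 * ((actions - mean_cur) / sigma) ** 2
    log_p_beh = -0.5 * ((actions - mean_beh) / sigma) ** 2

    # Plain importance weights correct for the mismatch between the
    # current policy and the policy that collected the data, which is
    # what makes reusing old samples valid at all.
    iw = np.exp(log_p_cur - log_p_beh)

    # M-step of the EM view of RWR: weighted least squares with
    # weights reward * importance weight.
    w = rewards * iw
    A = states.T @ (states * w[:, None])
    b = states.T @ (actions * w)
    return np.linalg.solve(A, b)
```

With theta_behavior equal to theta_cur the importance weights are all one and this reduces to standard RWR; the plain ratio used here can have high variance, which is the kind of instability a more careful sample-reuse scheme is meant to address.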

Details

Language(s): English
 Dates: 2011-11
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: URI: http://www.mitpressjournals.org/doi/pdf/10.1162/NECO_a_00199
DOI: 10.1162/NECO_a_00199
BibTex Citekey: HachiyaPS2011
 Degree: -


Source 1

Title: Neural Computation
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: 23 (11)
Sequence Number: -
Start / End Page: 2798 - 2832
Identifier: -