
Paper

Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning

MPS-Authors
Li, Wenbin
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Fritz, Mario
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Fulltext (public)
arXiv:1711.00267.pdf (Preprint), 445 KB

Citation

Li, W., Bohg, J., & Fritz, M. (2017). Acquiring Target Stacking Skills by Goal-Parameterized Deep Reinforcement Learning. Retrieved from http://arxiv.org/abs/1711.00267.


Cite as: https://hdl.handle.net/21.11116/0000-0000-4345-7
Abstract
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. In this way, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may differ for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies parameterized by a goal. We validated the model on a toy example of navigation in a grid world with different target positions, and on a block stacking task with different target structures for the final tower. In contrast to prior work, our policies show better generalization across different goals.
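
The central idea in the abstract is a policy conditioned jointly on the current observation and a goal descriptor, so a single network can pursue a different target in every trial. Below is a minimal sketch of a goal-parameterized Q-network in PyTorch; the architecture, layer sizes, and input dimensions are illustrative assumptions for exposition, not the authors' actual model.

    # Minimal sketch (assumed architecture, not the paper's model): a
    # Q-network that takes both the state and a goal descriptor, so the
    # learned value function is Q(s, g, a) rather than Q(s, a) for one
    # fixed goal. All dimensions and layer sizes are placeholders.
    import torch
    import torch.nn as nn

    class GoalParameterizedQNet(nn.Module):
        def __init__(self, state_dim: int, goal_dim: int, n_actions: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + goal_dim, 128),
                nn.ReLU(),
                nn.Linear(128, 128),
                nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
            # Concatenating state and goal conditions the Q-values on the
            # target, which is what allows one policy to generalize across
            # different goals instead of being retrained per goal.
            return self.net(torch.cat([state, goal], dim=-1))

    # Greedy action selection toward a specific goal:
    q = GoalParameterizedQNet(state_dim=16, goal_dim=4, n_actions=5)
    state, goal = torch.randn(1, 16), torch.randn(1, 4)
    action = q(state, goal).argmax(dim=-1)

Because the goal enters the network as an ordinary input, the same trained weights evaluate actions under any goal drawn at trial time, which matches the abstract's claim of generalization across different target configurations.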