Efficient sample reuse in policy gradients with parameter-based exploration


DOI: 10.1162/NECO_A_00452
zbMATH Open: 1414.68090
arXiv: 1301.3966
OpenAlex: W2133224499
Wikidata: Q47904761 (Scholia: Q47904761)
MaRDI QID: Q5378202
FDO: Q5378202

Voot Tangkaratt, Tingting Zhao, Masashi Sugiyama, Jun Morimoto, Hirotaka Hachiya

Publication date: 12 June 2019

Published in: Neural Computation

Abstract: The policy gradient approach is a flexible and powerful reinforcement learning method, particularly for problems with continuous actions such as robot control. A common challenge in this setting is how to reduce the variance of policy gradient estimates so that policy updates are reliable. In this paper, we combine the following three ideas to obtain a highly effective policy gradient method: (a) policy gradients with parameter-based exploration, a recently proposed policy search method with low variance of gradient estimates; (b) an importance sampling technique, which allows us to reuse previously gathered data in a consistent way; and (c) an optimal baseline, which minimizes the variance of gradient estimates while maintaining their unbiasedness. For the proposed method, we give a theoretical analysis of the variance of gradient estimates and demonstrate its usefulness through extensive experiments.


Full work available at URL: https://arxiv.org/abs/1301.3966
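
The combination described in the abstract can be pictured with a short numerical sketch. The code below is a minimal illustration, not the paper's exact estimator: it assumes a diagonal Gaussian hyper-distribution over deterministic policy parameters (the parameter-based exploration setting), reweights returns collected under old hyperparameters with importance weights, and subtracts a variance-minimizing constant baseline of the assumed form E[w^2 R ||g||^2] / E[w^2 ||g||^2]. The function name pgpe_gradient_with_reuse and all variable names are illustrative; consult the paper for the precise estimator and baseline.

import numpy as np

def pgpe_gradient_with_reuse(thetas, returns, mu_old, sigma_old, mu, sigma):
    # thetas:    (N, d) policy parameters previously drawn from N(mu_old, diag(sigma_old**2))
    # returns:   (N,)   returns R(theta_n) observed when running the policy with those parameters
    # mu, sigma: (d,)   current hyperparameters of the Gaussian prior over theta
    def log_gauss(x, m, s):
        # log-density of a diagonal Gaussian, summed over parameter dimensions
        return np.sum(-0.5 * np.log(2.0 * np.pi * s**2) - (x - m)**2 / (2.0 * s**2), axis=1)

    # importance weights: reuse data collected under the old hyperparameters
    w = np.exp(log_gauss(thetas, mu, sigma) - log_gauss(thetas, mu_old, sigma_old))

    # score function of the Gaussian hyper-distribution w.r.t. mu and sigma
    g_mu = (thetas - mu) / sigma**2                      # (N, d)
    g_sigma = ((thetas - mu)**2 - sigma**2) / sigma**3   # (N, d)
    g = np.concatenate([g_mu, g_sigma], axis=1)          # (N, 2d)

    # variance-minimizing constant baseline (assumed form, see note above)
    g_norm2 = np.sum(g**2, axis=1)
    b = np.sum(w**2 * returns * g_norm2) / (np.sum(w**2 * g_norm2) + 1e-12)

    # importance-weighted, baseline-corrected gradient estimate
    grad = np.mean((w * (returns - b))[:, None] * g, axis=0)
    d = thetas.shape[1]
    return grad[:d], grad[d:]   # gradients w.r.t. mu and sigma

A caller would draw policy parameters from the old hyper-distribution, run the deterministic policy once per sample to obtain returns, and then repeatedly ascend the returned gradients as the hyperparameters move away from the sampling distribution; the importance weights keep these reused estimates consistent, at the cost of growing variance as the two distributions drift apart.
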









Cited in: 6 publications





