An active exploration method for data efficient reinforcement learning
Publication: Q2299097
DOI: 10.2478/AMCS-2019-0026
zbMATH Open: 1430.93072
OpenAlex: W2957466728
Wikidata: Q127540608 (Scholia: Q127540608)
MaRDI QID: Q2299097
FDO: Q2299097
Authors: Dongfang Zhao, Jiafeng Liu, Rui Wu, Dansong Cheng, Xianglong Tang
Publication date: 20 February 2020
Published in: International Journal of Applied Mathematics and Computer Science
Full work available at URL: https://doi.org/10.2478/amcs-2019-0026
Recommendations
- Efficient exploration through active learning for value function approximation in reinforcement learning
- A generalized path integral control approach to reinforcement learning
- Dual control for approximate Bayesian reinforcement learning
- Efficient sample reuse in policy gradients with parameter-based exploration
- Safe Exploration of State and Action Spaces in Reinforcement Learning
Cited In (4)
- Data-driven online modelling for a UGI gasification process using modified lazy learning with a relevance vector machine
- An information based approach to stochastic control problems
- Efficient exploration through active learning for value function approximation in reinforcement learning
- A linear programming methodology for approximate dynamic programming