Data-driven approximate Q-learning stabilization with optimality error bound analysis
From MaRDI portal
Publication:1737866
DOI: 10.1016/j.automatica.2019.01.018
zbMath: 1415.93219
OpenAlex: W2918463777
Wikidata: Q128312460
Scholia: Q128312460
MaRDI QID: Q1737866
Yuanjing Feng, Chenkun Yin, Yongqiang Li, Zhongsheng Hou, Chengzan Yang
Publication date: 24 April 2019
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2019.01.018
- Learning and adaptive systems in artificial intelligence (68T05)
- Applications of optimal control and differential games (49N90)
- Asymptotic stability in control theory (93D20)
Related Items
- On the effect of probing noise in optimal control LQR via Q-learning using adaptive filtering algorithms
- Optimal control for unknown mean-field discrete-time system based on Q-learning
Cites Work
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Model-free \(H_{\infty }\) control design for unknown linear discrete-time systems via Q-learning with LMI
- Complete stability analysis of a heuristic approximate dynamic programming control design
- A boundedness result for the direct heuristic dynamic programming
- Adaptive-resolution reinforcement learning with polynomial exploration in deterministic domains
- Data-driven asymptotic stabilization for discrete-time nonlinear systems