Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
DOI: 10.1016/j.automatica.2018.05.027 · zbMath: 1402.93126 · OpenAlex: W2807176303 · MaRDI QID: Q1626885
Syed Ali Asad Rizvi, Zongli Lin
Publication date: 21 November 2018
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2018.05.027
Learning and adaptive systems in artificial intelligence (68T05) · Feedback control (93B52) · 2-person games (91A05) · Application models in control theory (93C95) · Discrete-time control/observation systems (93C55) · \(H^\infty\)-control (93B36) · Linear systems in control theory (93C05)
Related Items (16)
Cites Work
- \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
- Model-free \(Q\)-learning designs for linear discrete-time zero-sum games with application to \(H^\infty\) control
- \({\mathcal Q}\)-learning
- Neural-network-observer-based optimal control for unknown nonlinear systems using adaptive dynamic programming
- Adaptive dynamic programming for online solution of a zero-sum differential game
- Stability Analysis of Discrete-Time Infinite-Horizon Optimal Control With Discounted Cost