\(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
Publication: 518303
DOI: 10.1016/j.automatica.2016.12.009
zbMATH: 1357.93034
OpenAlex: W2580629550
MaRDI QID: Q518303
Authors: Bahare Kiumarsi, Frank L. Lewis, Zhong-Ping Jiang
Publication date: 28 March 2017
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2016.12.009
MSC classification: Learning and adaptive systems in artificial intelligence (68T05); Discrete-time control/observation systems (93C55); \(H^\infty\)-control (93B36); Linear systems in control theory (93C05)
Related Items (36)
Model-free design of stochastic LQR controller from a primal-dual optimization perspective
Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
Suboptimal control for nonlinear systems with disturbance via integral sliding mode control and policy iteration
On the effect of probing noise in optimal control LQR via Q-learning using adaptive filtering algorithms
H∞ optimal control of unknown linear systems by adaptive dynamic programming with applications to time-delay systems
Observer-based H∞ control of a stochastic Korteweg–de Vries–Burgers equation
Adaptive Optimal Control of Linear Discrete-Time Networked Control Systems with Two-Channel Stochastic Dropouts
Improved model-free H∞ control for batch processes via off-policy 2D game Q-learning
Optimal control of unknown discrete-time linear systems with additive noise
Robust optimal tracking control for multiplayer systems by off-policy Q-learning approach
Learning-based T-sHDP() for optimal control of a class of nonlinear discrete-time systems
Adaptive optimization algorithm for nonlinear Markov jump systems with partial unknown dynamics
Output-feedback Q-learning for discrete-time linear H∞ tracking control: A Stackelberg game approach
Model-free finite-horizon optimal control of discrete-time two-player zero-sum games
Optimal output tracking control of linear discrete-time systems with unknown dynamics by adaptive dynamic programming and output feedback
Undiscounted reinforcement learning for infinite-time optimal output tracking and disturbance rejection of discrete-time LTI systems with unknown dynamics
Control policy learning design for vehicle urban positioning via BeiDou navigation
Off-policy reinforcement learning for tracking control of discrete-time Markov jump linear systems with completely unknown dynamics
Consensus tracking control for a class of general linear hybrid multi-agent systems: a model-free approach
Security consensus control for multi-agent systems under DoS attacks via reinforcement learning method
Incremental reinforcement learning and optimal output regulation under unmeasurable disturbances
Robust H∞ tracking of linear discrete-time systems using Q-learning
Data-based \(\mathcal{L}_2\) gain optimal control for discrete-time system with unknown dynamics
Total stability of equilibria motivates integral action in discrete-time nonlinear systems
Off-policy inverse Q-learning for discrete-time antagonistic unknown systems
\(\mathcal{H}_\infty\) tracking learning control for discrete-time Markov jump systems: a parallel off-policy reinforcement learning
Off-policy based adaptive dynamic programming method for nonzero-sum games on discrete-time system
Improved off-policy reinforcement learning algorithm for robust control of unmodeled nonlinear system with asymmetric state constraints
Finite-horizon H∞ tracking control for discrete-time linear systems
New insight into the simultaneous policy update algorithms related to \(H_\infty\) state feedback control
Boundary control of linear stochastic reaction-diffusion systems
Tracking control optimization scheme for a class of partially unknown fuzzy systems by using integral reinforcement learning architecture
Off-policy Q-learning: solving Nash equilibrium of multi-player games with network-induced delay and unmeasured state
Model-free \(H_\infty\) tracking control for de-oiling hydrocyclone systems via off-policy reinforcement learning
Adaptive optimal tracking controls of unknown multi-input systems based on nonzero-sum game theory
Adaptive dynamic programming in the Hamiltonian-driven framework
Cites Work
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Model-free \(Q\)-learning designs for linear discrete-time zero-sum games with application to \(H^\infty\) control
- Reinforcement learning solution for HJB equation arising in constrained optimal control problem
- Robust and \(H_\infty\) control
- Data-based approximate policy iteration for affine nonlinear continuous-time optimal control design
- State-space solutions to standard \(H_2\) and \(H_\infty\) control problems
- Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses
- \(L_2\)-gain analysis of nonlinear systems and nonlinear state-feedback \(H_\infty\) control
- The discrete-time Riccati equation related to the \(H_\infty\) control problem
- \(H^\infty\)-optimal control and related minimax design problems. A dynamic game approach.