Output‐feedback H∞ quadratic tracking control of linear systems using reinforcement learning
DOI: 10.1002/acs.2830 · zbMATH Open: 1417.93141 · OpenAlex: W2766143943 · MaRDI QID: Q5222718 · FDO: Q5222718
Authors: Rohollah Moghadam, Frank L. Lewis
Publication date: 10 July 2019
Published in: International Journal of Adaptive Control and Signal Processing
Full work available at URL: https://doi.org/10.1002/acs.2830
Recommendations
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
- \(H_\infty\) tracking control for linear discrete-time systems via reinforcement learning
- Output feedback reinforcement learning control for linear systems
- Linear quadratic tracking control of unknown systems: a two-phase reinforcement learning method
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Adaptive optimal output tracking of continuous-time systems via output-feedback-based reinforcement learning
- Experience replay-based output feedback Q-learning scheme for optimal output tracking control of discrete-time linear systems
- Reinforcement learning for optimal feedback control. A Lyapunov-based approach
- Linear Quadratic Control Using Model-Free Reinforcement Learning
- Output‐feedback Q‐learning for discrete‐time linear H∞ tracking control: A Stackelberg game approach
Keywords: optimal control; output feedback; \(H_\infty\) controller; reinforcement learning (RL); bounded \(L_2\)-gain
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Applications of game theory (91A80); Applications of optimal control and differential games (49N90); \(H^\infty\)-control (93B36); Feedback control (93B52); Linear systems in control theory (93C05)
Cites Work
- Static output feedback -- a survey
- \(H^ \infty\)-optimal control and related minimax design problems. A dynamic game approach.
- Policy Iterations on the Hamilton–Jacobi–Isaacs Equation for $H_{\infty}$ State Feedback Control With Input Saturation
- Approximate Dynamic Programming
- Online solution of nonlinear two‐player zero‐sum games using synchronous policy iteration
- Reinforcement learning. An introduction
- Adaptive optimal control for continuous-time linear systems based on policy iteration
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
- Optimal model-free output synchronization of heterogeneous systems using off-policy reinforcement learning
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
- An iterative adaptive dynamic programming method for solving a class of nonlinear zero-sum differential games
- Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
- Integral \(Q\)-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems
- Simultaneous policy update algorithms for learning the solution of linear continuous-time \(H_{\infty}\) state feedback control
- Computationally efficient simultaneous policy update algorithm for nonlinear \(H_{\infty }\) state feedback control with Galerkin's method
Cited In (23)
- Adaptive optimal output tracking of continuous-time systems via output-feedback-based reinforcement learning
- Output regulation of unknown linear systems using average cost reinforcement learning
- \(H_{\infty}\) optimal preview tracking control problem with disturbance attenuation
- Fault-tolerant tracking control based on reinforcement learning with application to a steer-by-wire system
- Output feedback adaptive dynamic programming for linear differential zero-sum games
- Output‐feedback Q‐learning for discrete‐time linear H∞ tracking control: A Stackelberg game approach
- An output feedback reinforcement learning control method based on a reference model
- Input-to-state \(\mathcal{H}_\infty\) learning of recurrent neural networks with delay and disturbance
- Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
- Two‐loop reinforcement learning algorithm for finite‐horizon optimal control of continuous‐time affine nonlinear systems
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
- Finite‐horizon H∞ tracking control for discrete‐time linear systems
- Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning
- Editorial for the special issue on learning-based adaptive control: theory and applications
- Tracking control optimization scheme for a class of partially unknown fuzzy systems by using integral reinforcement learning architecture
- \(H_\infty\) tracking control for linear discrete-time systems via reinforcement learning
- Design of zero-sum game-based \(H_{\infty}\) optimal preview repetitive control systems with external disturbance and input delay
- Experience replay-based output feedback Q-learning scheme for optimal output tracking control of discrete-time linear systems
- Linear quadratic tracking control of unknown systems: a two-phase reinforcement learning method
- Model-free \(H_\infty\) tracking control for de-oiling hydrocyclone systems via off-policy reinforcement learning
- Undiscounted reinforcement learning for infinite-time optimal output tracking and disturbance rejection of discrete-time LTI systems with unknown dynamics
- \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
- Adaptive output-feedback quadratic tracking control of continuous-time systems via value iteration with its application