Output‐feedback Q‐learning for discrete‐time linear H∞ tracking control: A Stackelberg game approach
From MaRDI portal
Publication:6090136
Recommendations
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
- \(H_\infty\) tracking control for linear discrete-time systems via reinforcement learning
- Robust H∞ tracking of linear discrete‐time systems using Q‐learning
- Output‐feedback H∞ quadratic tracking control of linear systems using reinforcement learning
Cites work
- Scientific article; zbMATH DE number 1001726 (no title available)
- Scientific article; zbMATH DE number 7385629 (no title available)
- Scientific article; zbMATH DE number 7385966 (no title available)
- A data‐based private learning framework for enhanced security against replay attacks in cyber‐physical systems
- Adaptive Dynamic Programming and Adaptive Optimal Output Regulation of Linear Systems
- Adaptive optimal control for continuous-time linear systems based on policy iteration
- Adaptive optimization algorithm for nonlinear Markov jump systems with partial unknown dynamics
- Differential Games
- Differential graphical games for \(H_\infty\) control of linear heterogeneous multiagent systems
- Finite-Time Distributed Tracking Control for Multi-Agent Systems With a Virtual Leader
- Finite-frequency \(H_-/H_\infty\) unknown input observer-based distributed fault detection for multi-agent systems
- LQ Synchronization of Discrete-Time Multiagent Systems: A Distributed Optimization Approach
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning
- Model-free \(H_{\infty }\) control design for unknown linear discrete-time systems via Q-learning with LMI
- Model-free Q-learning designs for linear discrete-time zero-sum games with application to \(H_\infty\) control
- Online actor-critic algorithm to solve the continuous-time infinite horizon optimal control problem
- Optimal, constant I/O similarity scaling for full-information and state-feedback control problems
- Output feedback Q-learning for discrete-time linear zero-sum games with application to the \(H_\infty\) control
- Output‐feedback H∞ quadratic tracking control of linear systems using reinforcement learning
- Reinforcement Learning-Based Adaptive Optimal Exponential Tracking Control of Linear Systems With Unknown Dynamics
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Reinforcement learning. An introduction
- Robust adaptive dynamic programming for linear and nonlinear systems: an overview
- Safe reinforcement learning for dynamical games
- Stability Analysis of Discrete-Time Infinite-Horizon Optimal Control With Discounted Cost
- The discrete-time Riccati equation related to the \(H_\infty\) control problem
- \(\mathcal H_{\infty }\)-filtering for singularly perturbed nonlinear systems
- \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning
Cited in (6)
- Security consensus control for multi-agent systems under DoS attacks via reinforcement learning method
- Robust H∞ tracking of linear discrete‐time systems using Q‐learning
- Finite-horizon Q-learning for discrete-time zero-sum games with application to \(H_{\infty}\) control
- Robust optimal tracking control for multiplayer systems by off‐policy Q‐learning approach
- Output‐feedback H∞ quadratic tracking control of linear systems using reinforcement learning
- Model-free Q-learning designs for linear discrete-time zero-sum games with application to \(H_\infty\) control