Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
Publication: 463893
DOI: 10.1016/j.automatica.2014.02.015 · zbMath: 1417.93134 · OpenAlex: W2013895638 · MaRDI QID: Q463893
Hamidreza Modares, Mohammad-Bagher Naghibi-Sistani, Bahare Kiumarsi, Ali Karimpour, Frank L. Lewis
Publication date: 17 October 2014
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2014.02.015
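For orientation, here is a minimal sketch of the standard discounted linear quadratic tracking (LQT) setup with a quadratic \(Q\)-function that the title refers to; the symbols \(A\), \(B\), \(F\), \(Q_1\), \(R\), \(\gamma\), \(H\) are introduced here only for illustration and are not quoted from the paper, whose exact formulation may differ:
\[
x_{k+1} = A x_k + B u_k, \qquad r_{k+1} = F r_k, \qquad X_k = \begin{bmatrix} x_k \\ r_k \end{bmatrix},
\]
\[
V(X_k) = \sum_{i=k}^{\infty} \gamma^{\,i-k} \bigl[ (x_i - r_i)^\top Q_1 (x_i - r_i) + u_i^\top R\, u_i \bigr], \qquad
\mathcal{Q}(X_k, u_k) = \begin{bmatrix} X_k \\ u_k \end{bmatrix}^\top H \begin{bmatrix} X_k \\ u_k \end{bmatrix},
\]
\[
\mathcal{Q}(X_k, u_k) = (x_k - r_k)^\top Q_1 (x_k - r_k) + u_k^\top R\, u_k + \gamma \min_{u} \mathcal{Q}(X_{k+1}, u), \qquad
u_k^\ast = -H_{uu}^{-1} H_{uX}\, X_k.
\]
In this generic \(Q\)-learning setting the kernel matrix \(H\) is estimated from measured state and input data, so the tracking gain is obtained without knowledge of \(A\) or \(B\).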
Related Items (62)
Online data-enabled predictive control ⋮ Online identifier–actor–critic algorithm for optimal control of nonlinear systems ⋮ Inversion-based output tracking and unknown input reconstruction of square discrete-time linear systems ⋮ Data-driven optimal tracking control of discrete-time linear systems with multiple delays via the value iteration algorithm ⋮ Neural-network-based stochastic linear quadratic optimal tracking control scheme for unknown discrete-time systems using adaptive dynamic programming ⋮ ADP based optimal tracking control for a class of linear discrete-time system with multiple delays ⋮ Efficient model-based reinforcement learning for approximate online optimal control ⋮ Model-free finite-horizon optimal tracking control of discrete-time linear systems ⋮ An integrated data-driven Markov parameters sequence identification and adaptive dynamic programming method to design fault-tolerant optimal tracking control for completely unknown model systems ⋮ Data-driven optimal tracking control for discrete-time systems with delays using adaptive dynamic programming ⋮ A novel adaptive dynamic programming based on tracking error for nonlinear discrete-time systems ⋮ Fault-tolerant tracking control based on reinforcement learning with application to a steer-by-wire system ⋮ Output synchronization of heterogeneous discrete-time systems: a model-free optimal approach ⋮ Simultaneous identification and optimal tracking control of unknown continuous-time systems with actuator constraints ⋮ Robust min-max optimal control design for systems with uncertain models: a neural dynamic programming approach ⋮ Stable image representation based stability performance monitoring and recovery for feedback control systems ⋮ Observer‐based adaptive controller design for chaos synchronization using Bernstein‐type operators ⋮ Optimal control of unknown discrete-time linear systems with additive noise ⋮ Robust optimal tracking control for multiplayer systems by off‐policy Q‐learning approach ⋮ Learning‐based T‐sHDP() for optimal control of a class of nonlinear discrete‐time systems ⋮ Model‐free adaptive tracking control for networked nonlinear systems with data dropout ⋮ Output‐feedback Q‐learning for discrete‐time linear H∞ tracking control: A Stackelberg game approach ⋮ Nonlinear control using human behavior learning ⋮ Model-free optimal tracking policies for Markov jump systems by solving non-zero-sum games ⋮ Optimal output tracking control of linear discrete-time systems with unknown dynamics by adaptive dynamic programming and output feedback ⋮ Reinforcement learning and cooperative \(H_\infty\) output regulation of linear continuous-time multi-agent systems ⋮ Undiscounted reinforcement learning for infinite-time optimal output tracking and disturbance rejection of discrete-time LTI systems with unknown dynamics ⋮ Specified convergence rate guaranteed output tracking of discrete-time systems via reinforcement learning ⋮ Control policy learning design for vehicle urban positioning via BeiDou navigation ⋮ Model-based reinforcement learning for approximate optimal regulation ⋮ Robust H∞ tracking of linear discrete‐time systems using Q‐learning ⋮ Non-zero sum Nash Q-learning for unknown deterministic continuous-time linear systems ⋮ \(\mathcal{H}_\infty\) tracking learning control for discrete-time Markov jump systems: a parallel off-policy reinforcement learning ⋮ Optimal tracking control for discrete‐time modal persistent dwell time switched systems based on Q‐learning ⋮
Optimal trajectory tracking control for a class of nonlinear nonaffine systems via generalized N‐step value gradient learning ⋮ Data‐driven control for networked systems with multiple packet dropouts ⋮ Finite‐horizon H∞ tracking control for discrete‐time linear systems ⋮ Improved value iteration for nonlinear tracking control with accelerated learning ⋮ Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning ⋮ Event-triggered optimal tracking control of nonlinear systems ⋮ Experience replay–based output feedback Q‐learning scheme for optimal output tracking control of discrete‐time linear systems ⋮ Reinforcement learning for a class of continuous-time input constrained optimal control problems ⋮ Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning ⋮ A Q-learning predictive control scheme with guaranteed stability ⋮ Discrete-time dynamical maximum power tracking control for a vertical axis water turbine with retractable blades ⋮ Data-driven approximate Q-learning stabilization with optimality error bound analysis ⋮ Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach ⋮ Optimal distributed synchronization control for continuous-time heterogeneous multi-agent differential graphical games ⋮ \(\mathrm{H}_\infty\) control of linear discrete-time systems: off-policy reinforcement learning ⋮ Planning for optimal control and performance certification in nonlinear systems with controlled or uncontrolled switches ⋮ Learning output reference model tracking for higher-order nonlinear systems with unknown dynamics ⋮ Tracking control optimization scheme for a class of partially unknown fuzzy systems by using integral reinforcement learning architecture ⋮ Online adaptive policy iteration based fault-tolerant control algorithm for continuous-time nonlinear tracking systems with actuator failures ⋮ Finite-horizon optimal tracking guidance for aircraft based on approximate dynamic programming ⋮ Model-free \(H_\infty\) tracking control for de-oiling hydrocyclone systems via off-policy reinforcement learning ⋮ Robust optimal control for finite-horizon zero-sum differential games via a plug-n-play event-triggered scheme ⋮ Online optimal and adaptive integral tracking control for varying discrete‐time systems using reinforcement learning ⋮ Optimal control of a class of nonlinear stochastic systems ⋮ Adaptive dynamic programming for model‐free tracking of trajectories with time‐varying parameters ⋮ Mixed density methods for approximate dynamic programming ⋮ Dissipativity-based verification for autonomous systems in adversarial environments ⋮ Computational intelligence in uncertainty quantification for learning control and differential games
Cites Work
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- An iterative adaptive dynamic programming method for solving a class of nonlinear zero-sum differential games
- Integral \(Q\)-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems
- Model-free \(Q\)-learning designs for linear discrete-time zero-sum games with application to \(H^\infty\) control
- Adaptive optimal control for continuous-time linear systems based on policy iteration
- Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers