Learning output reference model tracking for higher-order nonlinear systems with unknown dynamics
DOI: 10.3390/a12060121 · zbMATH Open: 1467.93147 · OpenAlex: W2949426592 · MaRDI QID: Q2004902
Authors: Timotei Lala, Mircea-Bogdan Rădac
Publication date: 7 October 2020
Published in: Algorithms
Full work available at URL: https://doi.org/10.3390/a12060121
Recommendations
- Nonlinear robust approximate optimal tracking control based on adaptive dynamic programming
- Output feedback tracking control of a class of continuous-time nonlinear systems via adaptive dynamic programming approach
- Adaptive dynamic programming for model-free tracking of trajectories with time-varying parameters
- Optimal tracking control based on approximate dynamic programming for unknown system
- Optimal output tracking control of linear discrete-time systems with unknown dynamics by adaptive dynamic programming and output feedback
Keywords: neural networks; approximate dynamic programming; reinforcement learning; data-driven control; model-free control; learning systems; multivariable control; reference trajectory tracking; aerodynamic rotor system; output reference model
MSC: Learning and adaptive systems in artificial intelligence (68T05); Dynamic programming (90C39); Feedback control (93B52); Nonlinear systems in control theory (93C10)
Cites Work
- Title not available
- Reinforcement learning. An introduction
- Virtual reference feedback tuning: A direct method for the design of feedback controllers
- Iterative feedback tuning—an overview
- Reinforcement Learning and Feedback Control: Using Natural Decision Methods to Design Optimal Adaptive Controllers
- Title not available
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Virtual reference feedback tuning for non-minimum phase plants
- On ramp metering: towards a better understanding of ALINEA via model-free control
- Model-free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning
- Data-driven multivariable ILC: enhanced performance by eliminating L and Q filters
- Data-driven adaptive dynamic programming for partially observable nonzero-sum games via \(Q\)-learning method
- Model-free adaptive control design for nonlinear discrete-time processes with reinforcement learning techniques