Fault-tolerant tracking control based on reinforcement learning with application to a steer-by-wire system
Publication: 2667426
DOI: 10.1016/J.JFRANKLIN.2021.12.012
zbMath: 1483.93095
OpenAlex: W4200581174
MaRDI QID: Q2667426
Yidong Tu, Kaibo Shi, Huan Chen, Shuping He, Hai Wang
Publication date: 4 March 2022
Published in: Journal of the Franklin Institute
Full work available at URL: https://doi.org/10.1016/j.jfranklin.2021.12.012
Related Items (1)
Cites Work
- Unnamed Item
- Optimal control of unknown affine nonlinear discrete-time systems using offline-trained neural networks with proof of convergence
- Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics
- Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning
- Reinforcement \(Q\)-learning for optimal tracking control of linear discrete-time systems with unknown dynamics
- Reliable state feedback control system design against actuator failures
- Fault detection filter design for a class of nonlinear Markovian jumping systems with mode-dependent time-varying delays
- Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems Using Reinforcement Learning