Convergence and robustness of bounded recurrent neural networks for solving dynamic Lyapunov equations
From MaRDI portal
Publication:6154480
DOI: 10.1016/j.ins.2021.12.039
OpenAlex: W4200217955
MaRDI QID: Q6154480
Guancheng Wang, Long Jin, Zhihao Hao, Bob Zhang
Publication date: 15 February 2024
Published in: Information Sciences
Full work available at URL: https://doi.org/10.1016/j.ins.2021.12.039
Keywords: robustness; recurrent neural network; finite-time convergence; bounded activation functions; dynamic Lyapunov equations
MSC classification: Artificial neural networks and deep learning (68T07); Numerical optimization and variational techniques (65K10); Smooth dynamical systems: general theory (37C99)
Cites Work
- Sliding mode control: a survey with applications in math
- Improved gradient-based neural networks for online solution of Lyapunov matrix equation
- Modified Newton integration algorithm with noise suppression for online dynamic nonlinear optimization
- A noise-suppressing Newton-Raphson iteration algorithm for solving the time-varying Lyapunov equation and robotic tracking problems
- Numerical algorithms for solving discrete Lyapunov tensor equation
- A parallel computing method based on zeroing neural networks for time-varying complex-valued matrix Moore-Penrose inversion
- Noise-Tolerant ZNN Models for Solving Time-Varying Zero-Finding Problems: A Control-Theoretic Approach
- A Cyclic Low-Rank Smith Method for Large Sparse Lyapunov Equations
- Lyapunov-Equation-Based Stability Analysis for Switched Linear Systems and Its Application to Switched Adaptive Control
- Zhang-Gradient Control
- Comprehensive study on complex-valued ZNN models activated by novel nonlinear functions for dynamic complex linear equations