Computing Lyapunov functions using deep neural networks
Publication: 2043422
DOI: 10.3934/jcd.2021006
OpenAlex: W3111609376
MaRDI QID: Q2043422
Publication date: 2 August 2021
Published in: Journal of Computational Dynamics
Full work available at URL: https://arxiv.org/abs/2005.08965
Keywords: stability, Lyapunov function, curse of dimensionality, small-gain condition, deep neural network, training algorithm
MSC classification: Artificial neural networks and deep learning (68T07); Stability of solutions to ordinary differential equations (34D20)
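The keywords summarize the paper's theme: representing a Lyapunov function by a deep neural network trained to satisfy the Lyapunov conditions, with a small-gain structure exploited to mitigate the curse of dimensionality. As a rough illustration of the general idea only (this is not the paper's algorithm; the dynamics f, network architecture, sampling region, margins, and loss are all assumptions made for this sketch), one can train a candidate V_theta by penalizing sampled violations of V(x) > 0 and of the decrease condition grad V(x) . f(x) < 0:

```python
# Minimal illustrative sketch, not the paper's method: learn a candidate
# Lyapunov function V_theta for an assumed example system.
import torch
import torch.nn as nn

torch.manual_seed(0)

def f(x):
    # Assumed example dynamics: an asymptotically stable 2-D system.
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x1 + x2, -x2 - x1**3], dim=1)

# Small feedforward candidate V_theta: R^2 -> R.
V = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(2000):
    x = 4.0 * (torch.rand(256, 2) - 0.5)     # sample states in [-2, 2]^2
    x.requires_grad_(True)
    v = V(x) - V(torch.zeros(1, 2))          # shift so that V(0) = 0
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * f(x)).sum(dim=1, keepdim=True)   # dV/dt along f
    r2 = (x**2).sum(dim=1, keepdim=True)
    # Hinge penalties for violating V(x) > 0 and Vdot(x) < 0 away from 0;
    # the margins 0.1 * |x|^2 are an arbitrary choice for this sketch.
    loss = torch.relu(0.1 * r2 - v).mean() + torch.relu(vdot + 0.1 * r2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The margins tied to |x|^2 keep both conditions strict away from the origin; a trained network of this kind is only a candidate, and a genuine certificate would require verifying the learned V afterwards, for example on a fine grid or with an SMT solver.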
Related Items (4)
- Discovering efficient periodic behaviors in mechanical systems via neural approximators
- Mini-workshop: Analysis of data-driven optimal control. Abstracts from the mini-workshop held May 9-15, 2021 (hybrid meeting)
- Low-rank kernel approximation of Lyapunov functions using neural networks
- Learning dynamical systems using local stability priors
Cites Work
- Review on computational methods for Lyapunov functions
- Advances in computational Lyapunov analysis using sum-of-squares programming
- On a small gain theorem for ISS networks in dissipative Lyapunov form
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Overcoming the curse of dimensionality for some Hamilton-Jacobi partial differential equations via neural network architectures
- Multilayer feedforward networks are universal approximators
- Small-gain theorem for ISS systems and applications
- A Lyapunov formulation of the nonlinear small-gain theorem for interconnected ISS systems
- DGM: a deep learning algorithm for solving partial differential equations
- Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach
- A regularization of Zubov’s equation for robust domains of attraction
- Computation of Lyapunov functions for nonlinear discrete time systems by linear programming
- Small Gain Theorems for Large Scale Systems and Construction of ISS Lyapunov Functions
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Optimization Methods for Large-Scale Machine Learning
- Smooth stabilization implies coprime factorization
- Deep backward schemes for high-dimensional nonlinear PDEs
- Solving high-dimensional partial differential equations using deep learning
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
- Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems
- Approximation by superpositions of a sigmoidal function