Computing Lyapunov functions using deep neural networks

From MaRDI portal
Publication: 6340960
Abstract: We propose a deep neural network architecture and a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
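The core of the training scheme described above is enforcing the Lyapunov decrease condition \(\nabla V(x) \cdot f(x) < 0\) on sampled states. The sketch below illustrates this idea on a small example; as an assumption for brevity it replaces the paper's deep compositional network with a simple quadratic ansatz \(V(x) = x^\top L L^\top x\), trained by gradient descent. The system matrix `A`, the margin `alpha`, and all hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: learn V(x) = x^T P x with P = L L^T (so V >= 0 by
# construction), enforcing the decrease condition
#   grad V(x) . f(x) + alpha * ||x||^2 <= 0
# on sampled states via a squared hinge loss. A deep network would replace
# the quadratic ansatz in the paper's actual method.

rng = np.random.default_rng(0)

A = np.array([[-0.5, 1.0],
              [0.0, -0.5]])              # stable linear test system x' = A x

def f(X):
    """Right-hand side of the ODE, evaluated row-wise on sample states."""
    return X @ A.T

alpha = 0.2                              # required decrease margin
X = rng.uniform(-1.0, 1.0, size=(500, 2))  # training states
L = np.eye(2)                            # factor of P = L L^T
lr = 0.05

for step in range(3000):
    P = L @ L.T
    FX = f(X)
    # g_i = grad V(x_i) . f(x_i) + alpha ||x_i||^2, with grad V(x) = 2 P x
    g = 2.0 * np.einsum('ni,ij,nj->n', X, P, FX) \
        + alpha * np.einsum('ni,ni->n', X, X)
    viol = np.maximum(g, 0.0)            # hinge on the decrease condition
    # d(mean viol^2)/dP = mean over i of 4 * viol_i * x_i f(x_i)^T
    G = 4.0 * (viol[:, None] * X).T @ FX / len(X)
    L -= lr * (G + G.T) @ L              # chain rule through P = L L^T

P = L @ L.T
print("mean residual violation:", viol.mean())
print("eigenvalues of learned P:", np.linalg.eigvalsh(P))
```

For this linear system, success can be checked directly: the learned `P` should be positive definite, and the symmetric part of `P @ A + A.T @ P` should be negative definite, which is exactly the Lyapunov decrease condition in matrix form. The paper's contribution is that a compositional network ansatz keeps the number of parameters polynomial in the state dimension, which a dense quadratic form does not.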

