Computing Lyapunov functions using deep neural networks

From MaRDI portal
Publication: Q6340960

DOI: 10.3934/JCD.2021006
arXiv: 2005.08965
MaRDI QID: Q6340960


Author: Lars Grüne


Publication date: 18 May 2020

Abstract: We propose a deep neural network architecture and a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
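The approach described in the abstract can be illustrated with a minimal NumPy sketch: train a small feedforward network V_theta so that, on sampled states x, it satisfies the two standard Lyapunov conditions V(x) > 0 and ∇V(x)·f(x) < 0, enforced here via hinge penalties. The two-dimensional system, network size, penalty margin, and finite-difference training loop below are illustrative assumptions, not details taken from the paper (which uses a compositional architecture to scale polynomially with dimension).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy asymptotically stable nonlinear system (illustrative choice,
# not from the paper): dx/dt = f(x).
def f(x):
    return np.array([-x[0] + x[1], -x[0] - x[1] ** 3])

dim, hidden = 2, 8

# One-hidden-layer network V_theta(x) = w2 . tanh(W1 x + b1),
# shifted so that V_theta(0) = 0 holds exactly.
def unpack(theta):
    W1 = theta[: dim * hidden].reshape(hidden, dim)
    b1 = theta[dim * hidden : dim * hidden + hidden]
    w2 = theta[dim * hidden + hidden :]
    return W1, b1, w2

def V(theta, x):
    W1, b1, w2 = unpack(theta)
    return w2 @ np.tanh(W1 @ x + b1) - w2 @ np.tanh(b1)

def gradV(theta, x):
    # Analytic gradient of V with respect to x.
    W1, b1, w2 = unpack(theta)
    s = np.tanh(W1 @ x + b1)
    return W1.T @ (w2 * (1 - s ** 2))

def loss(theta, xs, eps=0.1):
    # Hinge penalties for violating V(x) >= eps|x|^2 (positivity)
    # and gradV(x) . f(x) <= -eps|x|^2 (decrease along trajectories).
    total = 0.0
    for x in xs:
        n2 = x @ x
        total += max(0.0, eps * n2 - V(theta, x))
        total += max(0.0, gradV(theta, x) @ f(x) + eps * n2)
    return total / len(xs)

def fd_grad(theta, xs, h=1e-5):
    # Central finite-difference gradient; fine for this tiny sketch.
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp = theta.copy(); tp[i] += h
        tm = theta.copy(); tm[i] -= h
        g[i] = (loss(tp, xs) - loss(tm, xs)) / (2 * h)
    return g

theta = 0.5 * rng.standard_normal(dim * hidden + 2 * hidden)
xs = rng.uniform(-1.0, 1.0, size=(60, dim))  # training samples

loss0 = loss(theta, xs)
for _ in range(100):
    theta -= 0.1 * fd_grad(theta, xs)
print(f"loss: {loss0:.4f} -> {loss(theta, xs):.4f}")
```

A practical implementation would use automatic differentiation instead of finite differences, and the paper's key point is the compositional network structure, which keeps the neuron count polynomial in the state dimension under a small-gain condition.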

This page was built for publication: Computing Lyapunov functions using deep neural networks
