Universal Approximation Property of Neural Ordinary Differential Equations

From MaRDI portal
Publication:6355293

arXiv: 2012.02414 · MaRDI QID: Q6355293 · FDO: Q6355293


Authors: Takeshi Teshima, Koichi Tojo, Masahiro Ikeda, Isao Ishikawa, Kenta Oono


Publication date: 4 December 2020

Abstract: Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: they form an L^p-universal approximator for continuous maps under certain conditions. However, L^p-universality may fail to guarantee an approximation over the entire input domain, since it can hold even when the approximator differs substantially from the target function on a small region of the input space. To further uncover the potential of NODEs, we show a stronger approximation property, namely sup-universality, for approximating a large class of diffeomorphisms. The result is obtained by leveraging a structure theorem of the diffeomorphism group, and it complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee.
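The invertibility mentioned in the abstract comes from the ODE flow itself: a NODE maps an input by integrating dx/dt = f(t, x) forward in time, and the inverse map is obtained by integrating the same vector field backwards. A minimal sketch of this idea, using a fixed hand-written velocity field `f` in place of a trained neural network (all function names here are illustrative, not from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Stand-in for a trained neural velocity field; any smooth
    # function of (t, x) defines an invertible flow.
    return np.tanh(x) + 0.1 * t

def node_forward(x0, t0=0.0, t1=1.0):
    """Map x0 through the flow of dx/dt = f(t, x) from t0 to t1."""
    sol = solve_ivp(f, (t0, t1), np.atleast_1d(x0), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

def node_inverse(x1, t0=0.0, t1=1.0):
    """Invert the map by integrating the same ODE backwards in time."""
    sol = solve_ivp(f, (t1, t0), np.atleast_1d(x1), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

x = np.array([0.5, -1.2])
y = node_forward(x)
x_rec = node_inverse(y)
# Invertibility: backward integration recovers the input up to solver tolerance.
assert np.allclose(x, x_rec, atol=1e-5)
```

Because each flow map of an ODE is a diffeomorphism, compositions of such maps are exactly the kind of functions whose approximation the paper's sup-universality result addresses.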

