A Theoretical Analysis of Deep Neural Networks and Parametric PDEs
From MaRDI portal
Publication:6316459
MSC classification:
- 68T05 Learning and adaptive systems in artificial intelligence
- 65-XX Numerical analysis
- 65N30 Finite element, Rayleigh-Ritz and Galerkin methods for boundary value problems involving PDEs
- 35A35 Theoretical approximation in context of PDEs
- 41A25 Rate of convergence, degree of approximation
- 35J99 Elliptic equations and elliptic systems
- 41A46 Approximation by arbitrary nonlinear expressions; widths and entropy
Abstract: We derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric partial differential equations. In particular, we exploit the inherent low-dimensionality of the solution manifold, without any knowledge of its concrete shape, to obtain approximation rates that are significantly superior to those provided by classical neural network approximation results. Concretely, we use the existence of a small reduced basis to construct, for a large variety of parametric partial differential equations, neural networks that approximate the parametric solution maps in such a way that the sizes of these networks essentially depend only on the size of the reduced basis.
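The abstract's key premise, that parametric PDE solutions concentrate near a low-dimensional manifold spanned by a small reduced basis, can be illustrated numerically. The sketch below (a hypothetical toy problem, not taken from the paper) solves a 1D diffusion equation with a two-parameter piecewise-constant coefficient over a grid of parameter values and inspects the singular-value decay of the snapshot matrix; the rapid decay is what makes a small reduced basis, and hence a small approximating network, possible.

```python
import numpy as np

# Toy parametric PDE: -(a(x; mu) u'(x))' = 1 on (0,1), u(0) = u(1) = 0,
# with diffusion coefficient a = mu1 on [0, 1/2] and a = mu2 on (1/2, 1].
# (Hypothetical example chosen for illustration; not the paper's setting.)

n = 199                      # interior grid points
h = 1.0 / (n + 1)

def solve(mu1, mu2):
    """Finite-difference solve with the coefficient sampled at cell faces."""
    xf = np.linspace(h / 2, 1 - h / 2, n + 1)   # face midpoints x_{i+1/2}
    a = np.where(xf <= 0.5, mu1, mu2)
    main = (a[:-1] + a[1:]) / h**2              # diagonal of the stiffness matrix
    off = -a[1:-1] / h**2                       # off-diagonals (shared faces)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n))

# Snapshot matrix: one column per parameter sample on a 10 x 10 grid
mus = np.linspace(0.5, 2.0, 10)
S = np.column_stack([solve(m1, m2) for m1 in mus for m2 in mus])

# POD / SVD: singular-value decay reveals the low-dimensional solution manifold
sv = np.linalg.svd(S, compute_uv=False)
rel = sv / sv[0]
r = int(np.sum(rel > 1e-10))
print("snapshots:", S.shape[1], "| reduced-basis size at 1e-10 tolerance:", r)
```

Although 100 snapshots are collected, only a handful of singular values are non-negligible, so a reduced basis of that size captures the whole parametric family. In the paper's construction, the network then essentially only has to emulate the map from the parameter to the reduced-basis coefficients, which is why its size scales with the reduced-basis dimension rather than with the full discretization.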