Topological properties of the set of functions generated by neural networks of fixed size
DOI: 10.1007/s10208-020-09461-0
OpenAlex: W3025750359
MaRDI QID: Q2031060
Authors: Philipp Petersen, Mones Raslan, Felix Voigtlaender
Publication date: 8 June 2021
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1806.08459
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Variants of convex sets (star-shaped, (m, n)-convex, etc.) (52A30)
- Connections of general topology with other structures, applications (54H99)
- Computer science (68-XX)
Related Items (21)
- On PDE Characterization of Smooth Hierarchical Functions Computed by Neural Networks
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
- A Proof that Artificial Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
- Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions
- Full error analysis for the training of deep neural networks
- A machine learning approach to portfolio pricing and risk management for high-dimensional problems
- Some elliptic second order problems and neural network solutions: existence and error estimates
- Neural control of discrete weak formulations: Galerkin, least squares & minimal-residual methods with quasi-optimal weights
- Space-time error estimates for deep neural network approximations for differential equations
- Adversarial deep energy method for solving saddle point problems involving dielectric elastomers
- A deep double Ritz method (D²RM) for solving partial differential equations using neural networks
- Limitations of neural network training due to numerical instability of backpropagation
- Invariant spectral foliations with applications to model order reduction and synthesis
- Convolution hierarchical deep-learning neural networks (C-HiDeNN): finite elements, isogeometric analysis, tensor decomposition, and beyond
- Data-driven reduced order models using invariant foliations, manifolds and autoencoders
- Parameter identifiability of a deep feedforward ReLU neural network
- The universal approximation property. Characterization, construction, representation, and existence
- Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
- On the approximation of rough functions with deep neural networks
- Galerkin Neural Networks: A Framework for Approximating Variational Equations with Error Control
- Best k-layer neural network approximations
Cites Work
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Networks and the best approximation property
- Hardness results for neural network approximation problems
- Multilayer feedforward networks are universal approximators
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Approximation properties of a multilayered feedforward artificial neural network
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- On the mathematical foundations of learning
- Introduction to Topological Manifolds
- Best approximation by ridge functions in L_p-spaces
- Classical Fourier Analysis
- Universal approximation bounds for superpositions of a sigmoidal function
- The Structure of Non-Enumerable Sets of Points
- DOI: 10.1162/153244303321897690
- Neural Network Learning
- A mean field view of the landscape of two-layer neural networks
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Measure Theory
- Translation Invariant Subspaces of Finite Dimension
- A logical calculus of the ideas immanent in nervous activity
- Approximation by superpositions of a sigmoidal function