What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
Publication: 5071660
DOI: 10.1137/21M1418642
OpenAlex: W3162775926
MaRDI QID: Q5071660
Publication date: 22 April 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2105.03361
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
- Neural nets applied to problems in time-dependent statistical mechanics (82C32)
- Spaces of measures (46E27)
- Linear operators and ill-posed problems, regularization (47A52)
Related Items
- Uniform approximation rates and metric entropy of shallow neural networks
- Explicit representations for Banach subspaces of Lizorkin distributions
- Characterization of the variation spaces corresponding to shallow neural networks
- Connections between numerical algorithms for PDEs and neural networks
- Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
- Greedy training algorithms for neural networks and applications to PDEs
Cites Work
- Vector-valued Lg-splines. I: Interpolating splines
- Spline solutions to \(L^1\) extremal problems in one and several variables
- Locally adaptive regression splines
- Convex functional analysis
- A unifying representer theorem for inverse problems and machine learning
- Some results on Tchebycheffian spline functions and stochastic processes
- Convex optimization in sums of Banach spaces
- Splines Are Universal Solutions of Linear Inverse Problems with Generalized TV Regularization
- Deep Convolutional Neural Network for Inverse Problems in Imaging
- Deep Neural Networks With Trainable Activations and Controlled Lipschitz Constant
- Breaking the Curse of Dimensionality with Convex Neural Networks
- A representer theorem for deep kernel learning
- Theory of Reproducing Kernels