An analysis of training and generalization errors in shallow and deep networks
DOI: 10.1016/J.NEUNET.2019.08.028
zbMATH Open: 1434.68513
arXiv: 1802.06266
OpenAlex: W2972277540
Wikidata: Q90416114 (Scholia: Q90416114)
MaRDI QID: Q2185668
Authors: T. Poggio, H. N. Mhaskar
Publication date: 5 June 2020
Published in: Neural Networks
Abstract: This paper is motivated by an open problem concerning deep networks, namely, the apparent absence of over-fitting despite the large over-parametrization that allows perfect fitting of the training data. We analyze this phenomenon in the case of regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is an inappropriate measure of the generalization error in the approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of the maximum loss and, in some cases, as a pointwise error. We give estimates of exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield both a good training error and a good generalization error, and we estimate how much error to expect at given test data.
Full work available at URL: https://arxiv.org/abs/1802.06266
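The following is an illustrative sketch of the distinction drawn in the abstract between the two notions of generalization error; the notation (target \(f\), network output \(P\), data distribution \(\mu\) on a domain \(X\)) is generic and not taken from the paper itself.

The expected square loss measures the error in the \(L^2(\mu)\) sense,
\[ \mathcal{E}_2(P) = \mathbb{E}_{x\sim\mu}\bigl[(f(x)-P(x))^2\bigr] = \int_X (f(x)-P(x))^2\, d\mu(x), \]
whereas the maximum (uniform) loss is
\[ \mathcal{E}_\infty(P) = \|f-P\|_\infty = \max_{x\in X} |f(x)-P(x)|. \]
Since \(\mu\) is a probability measure, \(\mathcal{E}_2(P) \le \mathcal{E}_\infty(P)^2\), so a uniform (or pointwise) bound controls the expected square loss but not conversely; in this sense maximum-loss estimates are the stronger statements.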
Recommendations
- Full error analysis for the training of deep neural networks
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization
- Generalization Error in Deep Learning
- Deep vs. shallow networks: an approximation theory perspective
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
- High-dimensional dynamics of generalization error in neural networks
- Error bounds for approximations with deep ReLU networks
Cites Work
- Deep learning
- Degree of approximation by neural and translation networks with a single hidden layer
- Approximation with interpolatory constraints
- Minimum Sobolev norm interpolation with trigonometric polynomials on the torus
- Eignets for function approximation on manifolds
- Localized linear polynomial operators and quadrature formulas on the sphere
- On some convergence properties of the interpolation polynomials
- Sur l'approximation d'une fonction périodique et de ses dérivées successives par un polynôme trigonométrique et par ses dérivées successives
- Deep vs. shallow networks: an approximation theory perspective
- Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions
- Robust Large Margin Deep Neural Networks
- Applications of classical approximation theory to periodic basis function networks and computational harmonic analysis
Cited In (14)
- Over-parametrized deep neural networks minimizing the empirical risk do not generalize well
- Learning the mapping \(\mathbf{x}\mapsto \sum\limits_{i=1}^d x_i^2\): the cost of finding the needle in a haystack
- Theoretical issues in deep networks
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
- Overparameterization and generalization error: weighted trigonometric interpolation
- Strong overall error analysis for the training of artificial neural networks via random initializations
- Scaling description of generalization with number of parameters in deep learning
- Generalization Error in Deep Learning
- Smaller generalization error derived for a deep residual neural network compared with shallow networks
- A direct approach for function approximation on data defined manifolds
- Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks
- Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- Full error analysis for the training of deep neural networks