Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
DOI: 10.1016/J.NEUNET.2020.06.024
zbMATH Open: 1475.68315
arXiv: 1905.11427
OpenAlex: W3039204554
Wikidata: Q97517817 (Scholia: Q97517817)
MaRDI QID: Q2057701
FDO: Q2057701
Authors: Pengzhan Jin, Lu Lu, Yifa Tang, George Em Karniadakis
Publication date: 7 December 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1905.11427
Recommendations
- An analysis of training and generalization errors in shallow and deep networks
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization
- Generalization Error in Deep Learning
- High-dimensional dynamics of generalization error in neural networks
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Approximation error for neural network operators by an averaged modulus of smoothness
Keywords: neural networks; generalization error; data distribution; learnability; cover complexity; neural network smoothness
Cites Work
- The elements of statistical learning. Data mining, inference, and prediction
- DOI: 10.1162/153244303321897690
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Balls in \(\mathbb{R}^k\) do not cut all subsets of \(k+2\) points
- Large-scale machine learning with stochastic gradient descent
- Probability and computing. Randomization and probabilistic techniques in algorithms and data analysis
- Robust Large Margin Deep Neural Networks
- The implicit bias of gradient descent on separable data
- On the information bottleneck theory of deep learning
Cited In (11)
- Physics-informed neural networks with hard constraints for inverse design
- Deep learning architectures for nonlinear operator functions and nonlinear inverse problems
- Applications of finite difference-based physics-informed neural networks to steady incompressible isothermal and thermal flows
- An analysis of training and generalization errors in shallow and deep networks
- High-dimensional dynamics of generalization error in neural networks
- Rademacher complexity and the generalization error of residual networks
- Approximation capabilities of measure-preserving neural networks
- Generalization Error in Deep Learning
- Quantification on the generalization performance of deep neural network with Tychonoff separation axioms
- Reliable extrapolation of deep neural operators informed by physics or sparse observations
- Mosaic flows: a transferable deep learning framework for solving PDEs on unseen domains