Generalization Error Analysis of Neural Networks with Gradient Based Regularization
From MaRDI portal
Publication:5045671
Cites work
- A Nonlinear Primal-Dual Method for Total Variation-Based Image Restoration
- A multiphase level set framework for image segmentation using the Mumford and Shah model
- Active contours without edges
- Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
- Adversarial defense via the data-dependent activation, total variation minimization, and adversarial training
- An algorithm for total variation minimization and applications
- Approximation by superpositions of a sigmoidal function
- Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models
- DGM: a deep learning algorithm for solving partial differential equations
- Deep Nitsche Method: Deep Ritz Method with Essential Boundary Conditions
- Deep learning
- Deep learning: an introduction for applied mathematicians
- Hidden physics models: machine learning of nonlinear partial differential equations
- High-dimensional statistics. A non-asymptotic viewpoint
- Machine learning and computational mathematics
- Multi-Scale Deep Neural Network (MscaleDNN) Methods for Oscillatory Stokes Flows in Complex Domains
- Multi-scale deep neural network (MscaleDNN) for solving Poisson-Boltzmann equation in complex domains
- Multilayer feedforward networks are universal approximators
- Nonlinear total variation based noise removal algorithms
- Partial differential equation regularization for supervised machine learning
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Solving high-dimensional partial differential equations using deep learning
- The Split Bregman Method for L1-Regularized Problems
- The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems
- Universal approximation bounds for superpositions of a sigmoidal function
- Universality of deep convolutional neural networks
Cited in (7)
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization
- Block-regularized repeated learning-testing for estimating generalization error
- An analysis of training and generalization errors in shallow and deep networks
- A new method to compute the blood flow equations using the physics-informed neural operator
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
- Generalization Error in Deep Learning
- High-dimensional dynamics of generalization error in neural networks
This page was built for publication: Generalization Error Analysis of Neural Networks with Gradient Based Regularization