Generalization Error Analysis of Neural Networks with Gradient Based Regularization
From MaRDI portal
Publication: 5045671
DOI: 10.4208/CICP.OA-2021-0211
MaRDI QID: Q5045671
Authors: Ling-Feng Li, Jiang Yang, Xue-Cheng Tai
Publication date: 7 November 2022
Published in: Communications in Computational Physics
Full work available at URL: https://arxiv.org/abs/2107.02797
Cites Work
- Nonlinear total variation based noise removal algorithms
- DGM: a deep learning algorithm for solving partial differential equations
- Universal approximation bounds for superpositions of a sigmoidal function
- High-dimensional statistics. A non-asymptotic viewpoint
- Deep learning
- The Split Bregman Method for L1-Regularized Problems
- An algorithm for total variation minimization and applications
- Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models
- A Nonlinear Primal-Dual Method for Total Variation-Based Image Restoration
- Multilayer feedforward networks are universal approximators
- Active contours without edges
- Approximation by superpositions of a sigmoidal function
- A multiphase level set framework for image segmentation using the Mumford and Shah model
- Solving high-dimensional partial differential equations using deep learning
- Hidden physics models: machine learning of nonlinear partial differential equations
- The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems
- Universality of deep convolutional neural networks
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Machine learning and computational mathematics
- Adversarial defense via the data-dependent activation, total variation minimization, and adversarial training
- Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
- Deep learning: an introduction for applied mathematicians
- Deep Nitsche Method: Deep Ritz Method with Essential Boundary Conditions
- Multi-scale deep neural network (MscaleDNN) for solving Poisson-Boltzmann equation in complex domains
- Multi-Scale Deep Neural Network (MscaleDNN) Methods for Oscillatory Stokes Flows in Complex Domains
- Partial differential equation regularization for supervised machine learning
Cited In (6)
- An analysis of training and generalization errors in shallow and deep networks
- High-dimensional dynamics of generalization error in neural networks
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
- Generalization Error in Deep Learning
- A new method to compute the blood flow equations using the physics-informed neural operator
- Block-regularized repeated learning-testing for estimating generalization error