Universality of deep convolutional neural networks
Publication: 2300759
DOI: 10.1016/J.ACHA.2019.06.004 · zbMath: 1434.68531 · arXiv: 1805.10769 · OpenAlex: W2963626582 · MaRDI QID: Q2300759
Publication date: 28 February 2020
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1805.10769
Related Items (79)
Training a Neural-Network-Based Surrogate Model for Aerodynamic Optimisation Using a Gaussian Process ⋮ Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks ⋮ Generalization Error Analysis of Neural Networks with Gradient Based Regularization ⋮ Learning time-dependent PDEs with a linear and nonlinear separate convolutional neural network ⋮ DENSITY RESULTS BY DEEP NEURAL NETWORK OPERATORS WITH INTEGER WEIGHTS ⋮ Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints ⋮ Approximation properties of deep ReLU CNNs ⋮ Unnamed Item ⋮ Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces ⋮ Neural network interpolation operators activated by smooth ramp functions ⋮ Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces ⋮ Distributed semi-supervised regression learning with coefficient regularization ⋮ A note on the applications of one primary function in deep neural networks ⋮ Theory of deep convolutional neural networks: downsampling ⋮ Deep Neural Networks and PIDE Discretizations ⋮ Quantitative estimates for neural network operators implied by the asymptotic behaviour of the sigmoidal activation functions ⋮ A deep network construction that adapts to intrinsic dimensionality beyond the domain ⋮ Theory of deep convolutional neural networks. III: Approximating radial functions ⋮ Neural network approximation of continuous functions in high dimensions with applications to inverse problems ⋮ Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation ⋮ Rates of approximation by ReLU shallow neural networks ⋮ Deep learning methods for partial differential equations and related parameter identification problems ⋮ Universality of gradient descent neural network training ⋮ DNN-based speech watermarking resistant to desynchronization attacks ⋮ Neural network interpolation operators optimized by Lagrange polynomial ⋮ Convergence of deep convolutional neural networks ⋮ Probabilistic robustness estimates for feed-forward neural networks ⋮ Approximation Analysis of Convolutional Neural Networks ⋮ Approximation error for neural network operators by an averaged modulus of smoothness ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression ⋮ Universal regular conditional distributions via probabilistic transformers ⋮ On the K-functional in learning theory ⋮ Learning Optimal Multigrid Smoothers via Neural Networks ⋮ Scaling Up Bayesian Uncertainty Quantification for Inverse Problems Using Deep Neural Networks ⋮ Learning sparse and smooth functions by deep sigmoid nets ⋮ Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮ SignReLU neural network and its approximation ability ⋮ Neural network interpolation operators of multivariate functions ⋮ Self-Supervised Deep Learning for Image Reconstruction: A Langevin Monte Carlo Approach ⋮ Connections between Operator-Splitting Methods and Deep Neural Networks with Applications in Image Segmentation ⋮ Error bounds for approximations using multichannel deep convolutional neural networks with downsampling ⋮ Deep learning theory of distribution regression with CNNs ⋮ Approximation of nonlinear functionals using deep ReLU networks ⋮ The universal approximation theorem for complex-valued neural networks ⋮ Learning ability of interpolating deep convolutional neural networks ⋮ LU decomposition and Toeplitz decomposition of a neural network ⋮ Deep learning for inverse problems with unknown operator ⋮ Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation ⋮ Quadratic Neural Networks for Solving Inverse Problems ⋮ Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting) ⋮ Approximation bounds for convolutional neural networks in operator learning ⋮ Deep Network Approximation for Smooth Functions ⋮ Unnamed Item ⋮ Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks ⋮ Approximation of smooth functionals using deep ReLU networks ⋮ Learning with centered reproducing kernels ⋮ Approximation analysis of CNNs from a feature extraction view ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ Online regularized pairwise learning with least squares loss ⋮ Theory of deep convolutional neural networks. II: Spherical analysis ⋮ Rates of approximation by neural network interpolation operators ⋮ Stochastic Markov gradient descent and training low-bit neural networks ⋮ Deep neural networks for rotation-invariance approximation and learning ⋮ Learning under \((1 + \epsilon)\)-moment conditions ⋮ Neural ODEs as the deep limit of ResNets with constant weights ⋮ Balanced joint maximum mean discrepancy for deep transfer learning ⋮ The construction and approximation of ReLU neural network operators ⋮ On the rate of convergence of image classifiers based on convolutional neural networks ⋮ Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth ⋮ Convolutional spectral kernel learning with generalization guarantees ⋮ Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere ⋮ Learning rates for partially linear support vector machine in high dimensions ⋮ Approximation of functions from Korobov spaces by deep convolutional neural networks ⋮ Error bounds for ReLU networks with depth and width parameters ⋮ Learnable Empirical Mode Decomposition based on Mathematical Morphology ⋮ Analysis of convolutional neural network image classifiers in a hierarchical max-pooling model with additional local pooling ⋮ Approximating functions with multi-features by deep convolutional neural networks ⋮ Spline representation and redundancies of one-dimensional ReLU neural network models
Uses Software
Cites Work
- Consistency analysis of an empirical minimum error entropy algorithm
- Multilayer feedforward networks are universal approximators
- Provable approximation properties for deep neural networks
- Approximation properties of a multilayered feedforward artificial neural network
- Limitations of the approximation capabilities of neural networks with one hidden layer
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Deep vs. shallow networks: An approximation theory perspective
- Ten Lectures on Wavelets
- Universal approximation bounds for superpositions of a sigmoidal function
- Deep distributed convolutional neural networks: Universality
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls
- Shannon sampling and function reconstruction from point values
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- Learning theory of distributed spectral algorithms
- A Fast Learning Algorithm for Deep Belief Nets
- Approximation by superpositions of a sigmoidal function
This page was built for publication: Universality of deep convolutional neural networks