Deep distributed convolutional neural networks: Universality
Publication: 4560301
DOI: 10.1142/S0219530518500124
zbMath: 1442.68214
Wikidata: Q130124717 (Scholia: Q130124717)
MaRDI QID: Q4560301
Publication date: 10 December 2018
Published in: Analysis and Applications
Keywords: universality; convolutional neural networks; deep learning; deep distributed convolutional neural networks; filter mask
Related Items (61)
Compressed data separation via unconstrained \(\ell_1\)-split analysis ⋮ A mesh-free method using piecewise deep neural network for elliptic interface problems ⋮ Approximation properties of deep ReLU CNNs ⋮ Modified proximal symmetric ADMMs for multi-block separable convex optimization with linear constraints ⋮ Unnamed Item ⋮ Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces ⋮ Neural network interpolation operators activated by smooth ramp functions ⋮ Learning rate of distribution regression with dependent samples ⋮ Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces ⋮ Distributed semi-supervised regression learning with coefficient regularization ⋮ Theory of deep convolutional neural networks: downsampling ⋮ Theory of deep convolutional neural networks. III: Approximating radial functions ⋮ Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation ⋮ Rates of approximation by ReLU shallow neural networks ⋮ Neural network interpolation operators optimized by Lagrange polynomial ⋮ Probabilistic robustness estimates for feed-forward neural networks ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression ⋮ On the K-functional in learning theory ⋮ Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results ⋮ Convergence theorems in Orlicz and Bögel continuous functions spaces by means of Kantorovich discrete type sampling operators ⋮ Some new inequalities and numerical results of bivariate Bernstein-type operator including Bézier basis and its GBS operator ⋮ Rate of convergence of Stancu type modified \(q\)-Gamma operators for functions with derivatives of bounded variation ⋮ Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮ Dunkl analouge of Szász Schurer Beta bivariate operators ⋮ SignReLU neural network and its approximation ability ⋮ Neural network interpolation operators of multivariate functions ⋮ Error bounds for approximations using multichannel deep convolutional neural networks with downsampling ⋮ On Szász-Durrmeyer type modification using Gould Hopper polynomials ⋮ Deep learning theory of distribution regression with CNNs ⋮ Approximation of nonlinear functionals using deep ReLU networks ⋮ Deep learning via dynamical systems: an approximation perspective ⋮ Sketching with Spherical Designs for Noisy Data Fitting on Spheres ⋮ Quadratic Neural Networks for Solving Inverse Problems ⋮ Bias corrected regularization kernel method in ranking ⋮ PhaseMax: Stable guarantees from noisy sub-Gaussian measurements ⋮ Approximation rates for neural networks with general activation functions ⋮ Approximation Properties of Ridge Functions and Extreme Learning Machines ⋮ Unnamed Item ⋮ Optimal learning rates for distribution regression ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ Unnamed Item ⋮ Universality of deep convolutional neural networks ⋮ Equivalence of approximation by convolutional neural networks and fully-connected networks ⋮ Online regularized pairwise learning with least squares loss ⋮ Theory of deep convolutional neural networks. II: Spherical analysis ⋮ MgNet: a unified framework of multigrid and convolutional neural network ⋮ Rates of approximation by neural network interpolation operators ⋮ Stochastic Markov gradient descent and training low-bit neural networks ⋮ Deep neural networks for rotation-invariance approximation and learning ⋮ Robust randomized optimization with k nearest neighbors ⋮ Learning under \((1 + \epsilon)\)-moment conditions ⋮ Balanced joint maximum mean discrepancy for deep transfer learning ⋮ On the speed of uniform convergence in Mercer's theorem ⋮ Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere ⋮ Learning rates for partially linear support vector machine in high dimensions ⋮ Approximation by max-product sampling Kantorovich operators with generalized kernels ⋮ Functional linear regression with Huber loss ⋮ Approximation of functions from Korobov spaces by deep convolutional neural networks ⋮ Approximating functions with multi-features by deep convolutional neural networks ⋮ Error analysis of the moving least-squares regression learning algorithm with \(\beta\)-mixing and non-identical sampling
Cites Work
- Consistency analysis of an empirical minimum error entropy algorithm
- Unregularized online learning algorithms with general loss functions
- On best approximation by ridge functions
- Fundamentality of ridge functions
- Why does deep and cheap learning work so well?
- Multilayer feedforward networks are universal approximators
- Approximation properties of a multilayered feedforward artificial neural network
- Limitations of the approximation capabilities of neural networks with one hidden layer
- Deep vs. shallow networks: An approximation theory perspective
- Ten Lectures on Wavelets
- Universal approximation bounds for superpositions of a sigmoidal function
- Neural Networks for Localized Approximation
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms
- A Fast Learning Algorithm for Deep Belief Nets
- Approximation by superpositions of a sigmoidal function