Deep distributed convolutional neural networks: Universality


Publication:4560301

DOI: 10.1142/S0219530518500124
zbMath: 1442.68214
Wikidata: Q130124717 (Scholia: Q130124717)
MaRDI QID: Q4560301

Ding-Xuan Zhou

Publication date: 10 December 2018

Published in: Analysis and Applications




Related Items (61)

Compressed data separation via unconstrained l1-split analysis
A mesh-free method using piecewise deep neural network for elliptic interface problems
Approximation properties of deep ReLU CNNs
Modified proximal symmetric ADMMs for multi-block separable convex optimization with linear constraints
Unnamed Item
Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
Neural network interpolation operators activated by smooth ramp functions
Learning rate of distribution regression with dependent samples
Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces
Distributed semi-supervised regression learning with coefficient regularization
Theory of deep convolutional neural networks: downsampling
Theory of deep convolutional neural networks. III: Approximating radial functions
Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
Rates of approximation by ReLU shallow neural networks
Neural network interpolation operators optimized by Lagrange polynomial
Probabilistic robustness estimates for feed-forward neural networks
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression
On the K-functional in learning theory
Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
Convergence theorems in Orlicz and Bögel continuous functions spaces by means of Kantorovich discrete type sampling operators
Some new inequalities and numerical results of bivariate Bernstein-type operator including Bézier basis and its GBS operator
Rate of convergence of Stancu type modified \(q\)-Gamma operators for functions with derivatives of bounded variation
Error analysis of kernel regularized pairwise learning with a strongly convex loss
Dunkl analouge of Szász Schurer Beta bivariate operators
SignReLU neural network and its approximation ability
Neural network interpolation operators of multivariate functions
Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
On Szász-Durrmeyer type modification using Gould Hopper polynomials
Deep learning theory of distribution regression with CNNs
Approximation of nonlinear functionals using deep ReLU networks
Deep learning via dynamical systems: an approximation perspective
Sketching with Spherical Designs for Noisy Data Fitting on Spheres
Quadratic Neural Networks for Solving Inverse Problems
Bias corrected regularization kernel method in ranking
PhaseMax: Stable guarantees from noisy sub-Gaussian measurements
Approximation rates for neural networks with general activation functions
Approximation Properties of Ridge Functions and Extreme Learning Machines
Unnamed Item
Optimal learning rates for distribution regression
Distributed regularized least squares with flexible Gaussian kernels
Unnamed Item
Universality of deep convolutional neural networks
Equivalence of approximation by convolutional neural networks and fully-connected networks
Online regularized pairwise learning with least squares loss
Theory of deep convolutional neural networks. II: Spherical analysis
MgNet: a unified framework of multigrid and convolutional neural network
Rates of approximation by neural network interpolation operators
Stochastic Markov gradient descent and training low-bit neural networks
Deep neural networks for rotation-invariance approximation and learning
Robust randomized optimization with k nearest neighbors
Learning under \((1 + \epsilon)\)-moment conditions
Balanced joint maximum mean discrepancy for deep transfer learning
On the speed of uniform convergence in Mercer's theorem
Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere
Learning rates for partially linear support vector machine in high dimensions
Approximation by max-product sampling Kantorovich operators with generalized kernels
Functional linear regression with Huber loss
Approximation of functions from Korobov spaces by deep convolutional neural networks
Approximating functions with multi-features by deep convolutional neural networks
Error analysis of the moving least-squares regression learning algorithm with β-mixing and non-identical sampling



