Thresholded spectral algorithms for sparse approximations

From MaRDI portal

Publication: 5267950

DOI: 10.1142/S0219530517500026
zbMath: 1409.68232
OpenAlex: W2580921317
MaRDI QID: Q5267950

Xin Guo, Zheng-Chu Guo, Ding-Xuan Zhou, Dao-Hong Xiang

Publication date: 13 June 2017

Published in: Analysis and Applications

Full work available at URL: https://doi.org/10.1142/s0219530517500026




Related Items (33)

Unnamed Item
Deep distributed convolutional neural networks: Universality
Gradient descent for robust kernel-based regression
Approximation on variable exponent spaces by linear integral operators
Distributed kernel gradient descent algorithm for minimum error entropy principle
Averaging versus voting: a comparative study of strategies for distributed classification
Theory of deep convolutional neural networks: downsampling
Theory of deep convolutional neural networks. III: Approximating radial functions
Rates of approximation by ReLU shallow neural networks
Neural network interpolation operators optimized by Lagrange polynomial
Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
Dunkl analouge of Szász Schurer Beta bivariate operators
On Szász-Durrmeyer type modification using Gould Hopper polynomials
Reproducing kernels of Sobolev spaces on ℝd and applications to embedding constants and tractability
On meshfree numerical differentiation
Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
Chebyshev type inequality for stochastic Bernstein polynomials
Convergence of online mirror descent
Learning Theory of Randomized Sparse Kaczmarz Method
Optimal learning rates for distribution regression
Online pairwise learning algorithms with convex loss functions
Theory of deep convolutional neural networks. II: Spherical analysis
Accelerate stochastic subgradient method by leveraging local growth condition
Analysis of regularized Nyström subsampling for regression functions of low smoothness
Distributed learning with indefinite kernels
Sparse additive machine with ramp loss
Optimal rates for coefficient-based regularized regression
Analysis of singular value thresholding algorithm for matrix completion
Unnamed Item
Functional linear regression with Huber loss
Unnamed Item
Unnamed Item
