Thresholded spectral algorithms for sparse approximations

DOI: 10.1142/S0219530517500026
zbMATH: 1409.68232
OpenAlex: W2580921317
MaRDI QID: Q5267950

Authors: Xin Guo, Zheng-Chu Guo, Ding-Xuan Zhou, Dao-Hong Xiang

Publication date: 13 June 2017

Published in: Analysis and Applications

Full work available at URL: https://doi.org/10.1142/s0219530517500026



Related Items

- Deep distributed convolutional neural networks: Universality
- Gradient descent for robust kernel-based regression
- Approximation on variable exponent spaces by linear integral operators
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Averaging versus voting: a comparative study of strategies for distributed classification
- Theory of deep convolutional neural networks: downsampling
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Rates of approximation by ReLU shallow neural networks
- Neural network interpolation operators optimized by Lagrange polynomial
- Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
- Dunkl analouge of Szász Schurer Beta bivariate operators
- On Szász-Durrmeyer type modification using Gould Hopper polynomials
- Reproducing kernels of Sobolev spaces on ℝ^d and applications to embedding constants and tractability
- On meshfree numerical differentiation
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Chebyshev type inequality for stochastic Bernstein polynomials
- Convergence of online mirror descent
- Learning Theory of Randomized Sparse Kaczmarz Method
- Optimal learning rates for distribution regression
- Online pairwise learning algorithms with convex loss functions
- Theory of deep convolutional neural networks. II: Spherical analysis
- Accelerate stochastic subgradient method by leveraging local growth condition
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Distributed learning with indefinite kernels
- Sparse additive machine with ramp loss
- Optimal rates for coefficient-based regularized regression
- Analysis of singular value thresholding algorithm for matrix completion
- Functional linear regression with Huber loss


