Thresholded spectral algorithms for sparse approximations
Publication: 5267950
DOI: 10.1142/S0219530517500026
zbMATH Open: 1409.68232
OpenAlex: W2580921317
MaRDI QID: Q5267950
Xin Guo, Zheng-Chu Guo, Ding-Xuan Zhou, Dao-Hong Xiang
Publication date: 13 June 2017
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530517500026
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Learning and adaptive systems in artificial intelligence (68T05)
- Rate of convergence, degree of approximation (41A25)
Cites Work
- Title not available
- Title not available
- Learning Theory
- On early stopping in gradient descent learning
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Optimal rates for the regularized least-squares algorithm
- Title not available
- Leave-One-Out Bounds for Kernel Methods
- Concentration estimates for learning with unbounded sampling
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Title not available
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Regularization schemes for minimum error entropy principle
- Online learning with Markov sampling
- Regularization in kernel learning
- An empirical feature-based learning algorithm producing sparse approximations
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Title not available
Cited In (38)
- Optimal $k$-Thresholding Algorithms for Sparse Optimization Problems
- Functional linear regression with Huber loss
- Online pairwise learning algorithms with convex loss functions
- Analysis of singular value thresholding algorithm for matrix completion
- Convergence of online mirror descent
- Distributed learning with indefinite kernels
- On meshfree numerical differentiation
- Reproducing kernels of Sobolev spaces on ℝ^d and applications to embedding constants and tractability
- Rates of approximation by ReLU shallow neural networks
- Thresholded Basis Pursuit: LP Algorithm for Order-Wise Optimal Support Recovery for Sparse and Approximately Sparse Signals From Noisy Random Measurements
- Title not available
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
- Gradient descent for robust kernel-based regression
- Neural network interpolation operators optimized by Lagrange polynomial
- Sparse additive machine with ramp loss
- On Szász-Durrmeyer type modification using Gould Hopper polynomials
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Title not available
- Deep distributed convolutional neural networks: Universality
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Title not available
- Dunkl analogue of Szász Schurer Beta bivariate operators
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Learning Theory of Randomized Sparse Kaczmarz Method
- Averaging versus voting: a comparative study of strategies for distributed classification
- Theory of deep convolutional neural networks: downsampling
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- Theory of deep convolutional neural networks. II: Spherical analysis
- Fast thresholding algorithms with feedbacks for sparse signal recovery
- Optimal rates for coefficient-based regularized regression
- Approximation on variable exponent spaces by linear integral operators
- Chebyshev type inequality for stochastic Bernstein polynomials
- Optimal learning rates for distribution regression
- Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
- Title not available
- Accelerate stochastic subgradient method by leveraging local growth condition
- Approximation of functions from Korobov spaces by shallow neural networks