Thresholded spectral algorithms for sparse approximations
Publication:5267950
DOI: 10.1142/S0219530517500026 · zbMath: 1409.68232 · OpenAlex: W2580921317 · MaRDI QID: Q5267950
Xin Guo, Zheng-Chu Guo, Ding-Xuan Zhou, Dao-Hong Xiang
Publication date: 13 June 2017
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530517500026
Nonparametric regression and quantile regression (62G08); Learning and adaptive systems in artificial intelligence (68T05); Rate of convergence, degree of approximation (41A25)
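The publication concerns spectral regularization algorithms whose kernel expansion coefficients are thresholded to produce sparse approximations. The Python sketch below is purely illustrative and is not the algorithm analysed in the paper: it applies a spectral cut-off filter to the kernel Gram matrix and then hard-thresholds the resulting coefficients; the Gaussian kernel, the cut-off level `lam`, and the threshold `tau` are assumptions chosen only for this example.

```python
# Illustrative sketch only: a generic thresholded spectral algorithm for
# kernel regression, NOT the algorithm analysed in the cited paper.
# Assumptions: Gaussian kernel, spectral cut-off filter, hard thresholding.
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def thresholded_spectral_fit(X, y, lam=1e-3, tau=1e-2, sigma=0.5):
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Eigen-decomposition of the symmetric normalized Gram matrix K / n.
    evals, evecs = np.linalg.eigh(K / n)
    # Spectral cut-off filter: invert only eigenvalues above the
    # regularization level lam, discard the rest.
    filt = np.where(evals > lam, 1.0 / evals, 0.0)
    # Coefficients of the regularized estimator in the kernel expansion.
    alpha = evecs @ (filt * (evecs.T @ (y / n)))
    # Hard thresholding: drop small coefficients to obtain a sparse expansion.
    alpha[np.abs(alpha) < tau] = 0.0
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)
    alpha = thresholded_spectral_fit(X, y)
    print("nonzero coefficients:", np.count_nonzero(alpha), "of", len(alpha))
```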
Related Items
- Deep distributed convolutional neural networks: Universality
- Gradient descent for robust kernel-based regression
- Approximation on variable exponent spaces by linear integral operators
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Averaging versus voting: a comparative study of strategies for distributed classification
- Theory of deep convolutional neural networks: downsampling
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Rates of approximation by ReLU shallow neural networks
- Neural network interpolation operators optimized by Lagrange polynomial
- Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
- Dunkl analogue of Szász Schurer Beta bivariate operators
- On Szász-Durrmeyer type modification using Gould Hopper polynomials
- Reproducing kernels of Sobolev spaces on ℝ^d and applications to embedding constants and tractability
- On meshfree numerical differentiation
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Chebyshev type inequality for stochastic Bernstein polynomials
- Convergence of online mirror descent
- Learning theory of randomized sparse Kaczmarz method
- Optimal learning rates for distribution regression
- Online pairwise learning algorithms with convex loss functions
- Theory of deep convolutional neural networks. II: Spherical analysis
- Accelerate stochastic subgradient method by leveraging local growth condition
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Distributed learning with indefinite kernels
- Sparse additive machine with ramp loss
- Optimal rates for coefficient-based regularized regression
- Analysis of singular value thresholding algorithm for matrix completion
- Functional linear regression with Huber loss
Cites Work
- An empirical feature-based learning algorithm producing sparse approximations
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Model selection for regularized least-squares algorithm in learning theory
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Regularization in kernel learning
- On regularization algorithms in learning theory
- Concentration estimates for learning with unbounded sampling
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Learning Theory
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Online learning with Markov sampling
- Leave-One-Out Bounds for Kernel Methods
- Regularization schemes for minimum error entropy principle