Thresholded spectral algorithms for sparse approximations
Publication:5267950
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 936298
- An empirical feature-based learning algorithm producing sparse approximations
- Concentration estimates for learning with unbounded sampling
- Cross-validation based adaptation for regularization operators in learning theory
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Learning Theory
- Learning from examples as an inverse problem
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Learning with sample dependent hypothesis spaces
- Leave-One-Out Bounds for Kernel Methods
- Model selection for regularized least-squares algorithm in learning theory
- Online learning with Markov sampling
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- Optimal rates for the regularized least-squares algorithm
- Regularization in kernel learning
- Regularization schemes for minimum error entropy principle
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparsity and error analysis of empirical feature-based regularization schemes
- Spectral Algorithms for Supervised Learning
Cited in (39)
- Reproducing kernels of Sobolev spaces on \(\mathbb{R}^d\) and applications to embedding constants and tractability
- Analysis of singular value thresholding algorithm for matrix completion
- Averaging versus voting: a comparative study of strategies for distributed classification
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- On Szász-Durrmeyer type modification using Gould Hopper polynomials
- Theory of deep convolutional neural networks. II: Spherical analysis
- Convergence of online mirror descent
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Optimal rates for coefficient-based regularized regression
- Dunkl analogue of Szász-Schurer beta bivariate operators
- Chebyshev type inequality for stochastic Bernstein polynomials
- Optimal learning rates for distribution regression
- Theory of deep convolutional neural networks: downsampling
- Distributed learning with indefinite kernels
- Online pairwise learning algorithms with convex loss functions
- Consistency analysis of spectral regularization algorithms
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- Gradient descent for robust kernel-based regression
- Convergence on sequences of Szász-Jakimovski-Leviatan type operators and related results
- Approximation of functions from Korobov spaces by shallow neural networks
- Neural network interpolation operators optimized by Lagrange polynomial
- Sparse kernel regression with coefficient-based \(\ell_q\)-regularization
- Optimal \(k\)-thresholding algorithms for sparse optimization problems
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Learning theory of distributed regression with bias corrected regularization kernel network
- Thresholded Basis Pursuit: LP Algorithm for Order-Wise Optimal Support Recovery for Sparse and Approximately Sparse Signals From Noisy Random Measurements
- Deep distributed convolutional neural networks: universality
- Distributed learning with regularized least squares
- Boosted kernel ridge regression: optimal learning rates and early stopping
- Functional linear regression with Huber loss
- Sparse additive machine with ramp loss
- Approximation on variable exponent spaces by linear integral operators
- Rates of approximation by ReLU shallow neural networks
- Learning theory of randomized sparse Kaczmarz method
- Fast thresholding algorithms with feedbacks for sparse signal recovery
- On meshfree numerical differentiation
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
- Accelerate stochastic subgradient method by leveraging local growth condition