Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
Publication: 5175497
DOI: 10.1162/NECO_a_00641
zbMath: 1410.62119
arXiv: 1312.5465
Wikidata: Q50646399 (Scholia: Q50646399)
MaRDI QID: Q5175497
Authors: Jinshan Zeng, Jian Fang, Shao-Bo Lin, Zong Ben Xu
Publication date: 23 February 2015
Published in: Neural Computation
Full work available at URL: https://arxiv.org/abs/1312.5465
Classifications:
- Nonparametric regression and quantile regression (62G08)
- Learning and adaptive systems in artificial intelligence (68T05)
Related Items (8)
- Nonparametric regression using needlet kernels for spherical data
- Unnamed Item
- Convergence of online pairwise regression learning with quadratic loss
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Learning Rates for Classification with Gaussian Kernels
- Nyström subsampling method for coefficient-based regularized regression
- Distributed learning with indefinite kernels
- Optimal rates for coefficient-based regularized regression
Cites Work
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Least square regression with indefinite kernels and coefficient regularization
- Unified approach to coefficient-based regularized regression
- Regularization in kernel learning
- Learning and approximation by Gaussians on Riemannian manifolds
- Fast rates for support vector machines using Gaussian kernels
- Reproducing kernel Banach spaces with the \(\ell^1\) norm
- A distribution-free theory of nonparametric regression
- The covering number in learning theory
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Statistical performance of support vector machines
- Approximation methods for supervised learning
- Approximation and learning by greedy algorithms
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- On the mathematical foundations of learning
- Least Square Regression with lp-Coefficient Regularization
- Online regression with varying Gaussians and non-identical distributions
- Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
- Learning Theory
- An Explicit Description of the Reproducing Kernel Hilbert Spaces of Gaussian RBF Kernels
- Iteratively reweighted least squares minimization for sparse recovery
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory