Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
Publication: Q550498
DOI: 10.1016/j.acha.2011.01.001 · zbMath: 1221.68201 · MaRDI QID: Q550498
Publication date: 11 July 2011
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2011.01.001
Keywords: learning theory; concentration estimate for error analysis; \(\ell ^{1}\)-regularizer and sparsity; \(\ell ^{2}\)-empirical covering number; data dependent hypothesis space
Related Items
- Learning rates of \(l^q\) coefficient regularization learning with Gaussian kernel
- Error analysis of coefficient-based regularized algorithm for density-level detection
- Statistical consistency of coefficient-based conditional quantile regression
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Least squares regression with \(l_1\)-regularizer in sum space
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Convergence rate of the semi-supervised greedy algorithm
- Constructive analysis for coefficient regularization regression algorithms
- Learning theory approach to a system identification problem involving atomic norm
- On the convergence rate of kernel-based sequential greedy regression
- On grouping effect of elastic net
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Reproducing kernel Banach spaces with the \(\ell^1\) norm II: error analysis for regularized least square regression
- Learning by atomic norm regularization with polynomial kernels
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Model selection for regularized least-squares algorithm in learning theory
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- The covering number in learning theory
- Weak convergence and empirical processes. With applications to statistics
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Local polynomial reproduction and moving least squares approximation
- SVM learning and \(L_p\) approximation by Gaussians on Riemannian manifolds
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- Online learning with Markov sampling
- Estimating the approximation error in learning theory
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- Neural Network Learning
- On Complexity Issues of Online Learning Algorithms
- Sparsity and incoherence in compressive sampling
- Theory of Reproducing Kernels