Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
From MaRDI portal
Publication: 550498
DOI: 10.1016/j.acha.2011.01.001
zbMath: 1221.68201
MaRDI QID: Q550498
Publication date: 11 July 2011
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2011.01.001
Keywords: learning theory; concentration estimate for error analysis; \(\ell ^{1}\)-regularizer and sparsity; \(\ell ^{2}\)-empirical covering number; data dependent hypothesis space
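The keywords describe coefficient-based regularization: the hypothesis space is built from the sample itself (kernel sections \(K(\cdot, x_i)\) centered at the data points), and an \(\ell^1\) penalty on the coefficients induces sparsity. A minimal NumPy sketch of this scheme is given below; the Gaussian kernel, the width `sigma`, the penalty `lam`, and the coordinate-descent solver are illustrative choices, not the paper's specific setting.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-(x_i - z_j)^2 / (2 sigma^2))."""
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * sigma ** 2))

def l1_coefficient_regression(x, y, lam=0.01, sigma=0.5, n_sweeps=500):
    """Learn f(x) = sum_i a_i K(x, x_i) over the data-dependent dictionary
    {K(., x_i)} by minimizing (1/m)||K a - y||^2 + lam ||a||_1
    with cyclic coordinate descent (soft-thresholding updates)."""
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    a = np.zeros(m)
    col_sq = (K ** 2).sum(axis=0) / m  # per-coordinate curvature ||K_j||^2 / m
    for _ in range(n_sweeps):
        for j in range(m):
            # residual with coordinate j removed
            r = y - K @ a + K[:, j] * a[j]
            rho = K[:, j] @ r / m
            # soft-threshold at lam/2 (from the subgradient optimality condition)
            a[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / col_sq[j]
    return a, K

# usage: recover a smooth target from noisy samples
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 40))
y = np.sin(np.pi * x) + 0.05 * rng.normal(size=40)
a, K = l1_coefficient_regression(x, y)
```

The \(\ell^1\) penalty typically zeroes out many coefficients, so the learned expansion uses only a few of the \(m\) sample-centered kernel sections; this is the sparsity effect the concentration estimates of the paper quantify.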
Related Items
- Gradient descent for robust kernel-based regression
- Distributed learning with partial coefficients regularization
- Learning Theory of Randomized Sparse Kaczmarz Method
- Nyström subsampling method for coefficient-based regularized regression
- Sparse additive machine with ramp loss
- Multikernel Regression with Sparsity Constraint
- Parameter choices for sparse regularization with the ℓ1 norm
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Coefficient-based regularization network with variance loss for error
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- Regularized modal regression with data-dependent hypothesis spaces
- Distributed learning with indefinite kernels
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Learning rates for regularized least squares ranking algorithm
- Learning with Convex Loss and Indefinite Kernels
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Generalization Analysis of Fredholm Kernel Regularized Classifiers
- Learning Rates for Classification with Gaussian Kernels
- Optimality of the rescaled pure greedy learning algorithms
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Learning sparse and smooth functions by deep sigmoid nets
- Statistical consistency of coefficient-based conditional quantile regression
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Least squares regression with \(l_1\)-regularizer in sum space
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Convergence rate of the semi-supervised greedy algorithm
- Constructive analysis for coefficient regularization regression algorithms
- Error analysis for \(l^q\)-coefficient regularized moving least-square regression
- Learning theory approach to a system identification problem involving atomic norm
- Nonparametric regression using needlet kernels for spherical data
- Distributed regression learning with coefficient regularization
- Error analysis for coefficient-based regularized regression in additive models
- A simpler approach to coefficient regularized support vector machines regression
- Indefinite kernel network with \(l^q\)-norm regularization
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- On the convergence rate of kernel-based sequential greedy regression
- Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm
- Online pairwise learning algorithms with convex loss functions
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Optimal rates for coefficient-based regularized regression
- Approximation on variable exponent spaces by linear integral operators
- Kernel-based sparse regression with the correntropy-induced loss
- Distributed learning with multi-penalty regularization
- On grouping effect of elastic net
- Distributed semi-supervised regression learning with coefficient regularization
- Modal additive models with data-driven structure identification
- On reproducing kernel Banach spaces: generic definitions and unified framework of constructions
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
- Learning by atomic norm regularization with polynomial kernels
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Model selection for regularized least-squares algorithm in learning theory
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- The covering number in learning theory
- Weak convergence and empirical processes. With applications to statistics
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Local polynomial reproduction and moving least squares approximation
- SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- ONLINE LEARNING WITH MARKOV SAMPLING
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- Neural Network Learning
- On Complexity Issues of Online Learning Algorithms
- Sparsity and incoherence in compressive sampling
- Theory of Reproducing Kernels