Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
DOI: 10.1016/j.acha.2011.01.001 · zbMATH Open: 1221.68201 · OpenAlex: W1994642189 · MaRDI QID: Q550498 · FDO: Q550498
Authors: Yong-Cai Geng, Sumit K. Garg
Publication date: 11 July 2011
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2011.01.001
Recommendations
- An approximation theory approach to learning with \(\ell^1\) regularization
- On concentration for (regularized) empirical risk minimization
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Concentration estimates for the moving least-square method in learning theory
- From inexact optimization to learning via gradient concentration
- Concentration estimates for learning with unbounded sampling
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Learning rates for \(l^1\)-regularized kernel classifiers
- On convergence of kernel learning estimators
Keywords: learning theory; concentration estimate for error analysis; \(\ell^1\)-regularizer and sparsity; \(\ell^2\)-empirical covering number; data dependent hypothesis space
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
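For orientation, the keywords above refer to coefficient-based regularization, in which the hypothesis space is spanned by kernel functions centered at the sample points and sparsity is promoted by an \(\ell^1\) penalty on the expansion coefficients. A minimal sketch of the standard scheme follows; the notation and normalization are assumptions for illustration, not quoted from the paper:
\[
f_{\mathbf z}=\sum_{j=1}^{m}\alpha_j^{\mathbf z}\,K(\cdot,x_j),
\qquad
\alpha^{\mathbf z}=\operatorname*{arg\,min}_{\alpha\in\mathbb R^{m}}
\left\{\frac1m\sum_{i=1}^{m}\Bigl(\sum_{j=1}^{m}\alpha_j K(x_i,x_j)-y_i\Bigr)^{2}
+\lambda\sum_{j=1}^{m}\lvert\alpha_j\rvert\right\},
\]
where \(\mathbf z=\{(x_i,y_i)\}_{i=1}^{m}\) is the sample, \(K\) is a kernel (related work listed below also treats nonsymmetric kernels), and \(\lambda>0\) is the regularization parameter. The hypothesis space \(\{\sum_{j}\alpha_j K(\cdot,x_j):\alpha\in\mathbb R^{m}\}\) is data dependent because it is built from the sample points themselves.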
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Title not available
- Title not available
- Theory of Reproducing Kernels
- Support Vector Machines
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Optimal rates for the regularized least-squares algorithm
- Sparsity and incoherence in compressive sampling
- Title not available
- Local polynomial reproduction and moving least squares approximation
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- Neural Network Learning
- The covering number in learning theory
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Capacity of reproducing kernel spaces in learning theory
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Estimating the approximation error in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds
- Online learning with Markov sampling
- On Complexity Issues of Online Learning Algorithms
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
Cited In (63)
- Iterative kernel regression with preconditioning
- Coefficient-based regularized distribution regression
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Learning and approximating piecewise smooth functions by deep sigmoid neural networks
- Kernel-based sparse regression with the correntropy-induced loss
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Error analysis for \(l^q\)-coefficient regularized moving least-square regression
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Online pairwise learning algorithms with convex loss functions
- A simpler approach to coefficient regularized support vector machines regression
- Indefinite kernel network with \(l^q\)-norm regularization
- Generalization Analysis of Fredholm Kernel Regularized Classifiers
- Nyström subsampling method for coefficient-based regularized regression
- Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm
- Distributed learning with indefinite kernels
- An approximation theory approach to learning with \(\ell^1\) regularization
- An empirical feature-based learning algorithm producing sparse approximations
- Reproducing kernel Banach spaces with the \(\ell^{1}\) norm. II: Error analysis for regularized least square regression
- Title not available
- Learning with Convex Loss and Indefinite Kernels
- Gradient descent for robust kernel-based regression
- Regularized modal regression with data-dependent hypothesis spaces
- Sparse additive machine with ramp loss
- Statistical consistency of coefficient-based conditional quantile regression
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Nonparametric regression using needlet kernels for spherical data
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Distributed learning with multi-penalty regularization
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Distributed regression learning with coefficient regularization
- Title not available
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Convergence rate of the semi-supervised greedy algorithm
- Learning theory approach to a system identification problem involving atomic norm
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- Optimality of the rescaled pure greedy learning algorithms
- Sparsity and error analysis of empirical feature-based regularization schemes
- Learning rates for regularized least squares ranking algorithm
- Multikernel Regression with Sparsity Constraint
- Learning Theory of Randomized Sparse Kaczmarz Method
- Modal additive models with data-driven structure identification
- On the convergence rate of kernel-based sequential greedy regression
- Learning by atomic norm regularization with polynomial kernels
- On reproducing kernel Banach spaces: generic definitions and unified framework of constructions
- Distributed semi-supervised regression learning with coefficient regularization
- Coefficient-based regularization network with variance loss for error
- Constructive analysis for coefficient regularization regression algorithms
- Discussion of the paper ``On concentration for (regularized) empirical risk minimization''
- Optimal rates for coefficient-based regularized regression
- Approximation on variable exponent spaces by linear integral operators
- Error analysis for coefficient-based regularized regression in additive models
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Learning sparse and smooth functions by deep sigmoid nets
- Parameter choices for sparse regularization with the \(\ell^1\) norm
- Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
- Learning Rates for Classification with Gaussian Kernels
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Title not available
- On grouping effect of elastic net
- Least squares regression with \(l_1\)-regularizer in sum space
- Distributed learning with partial coefficients regularization