Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
From MaRDI portal
Publication: Q550498
Recommendations
- An approximation theory approach to learning with \(\ell^1\) regularization
- On concentration for (regularized) empirical risk minimization
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Concentration estimates for the moving least-square method in learning theory
- From inexact optimization to learning via gradient concentration
- Concentration estimates for learning with unbounded sampling
- Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
- Learning rates for \(l^1\)-regularized kernel classifiers
- On convergence of kernel learning estimators
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 962825
- Capacity of reproducing kernel spaces in learning theory
- Estimating the approximation error in learning theory
- Fast rates for support vector machines using Gaussian kernels
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Learning with sample dependent hypothesis spaces
- Leave-One-Out Bounds for Kernel Methods
- Local polynomial reproduction and moving least squares approximation
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Neural Network Learning
- Online learning with Markov sampling
- On Complexity Issues of Online Learning Algorithms
- Optimal rates for the regularized least-squares algorithm
- SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds
- Shannon sampling and function reconstruction from point values
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparsity and incoherence in compressive sampling
- Support Vector Machines
- The covering number in learning theory
- Theory of Reproducing Kernels
- Weak convergence and empirical processes. With applications to statistics
Cited in (63)
- Nonparametric regression using needlet kernels for spherical data
- Modal additive models with data-driven structure identification
- Kernel-based sparse regression with the correntropy-induced loss
- Generalization analysis of Fredholm kernel regularized classifiers
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
- Optimal rates for coefficient-based regularized regression
- Distributed semi-supervised regression learning with coefficient regularization
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Distributed learning with indefinite kernels
- Statistical consistency of coefficient-based conditional quantile regression
- Online pairwise learning algorithms with convex loss functions
- Error analysis for coefficient-based regularized regression in additive models
- Gradient descent for robust kernel-based regression
- Learning theory approach to a system identification problem involving atomic norm
- Optimality of the rescaled pure greedy learning algorithms
- Sparse kernel regression with coefficient-based \(\ell_q\)-regularization
- An empirical feature-based learning algorithm producing sparse approximations
- Discussion of the paper ``On concentration for (regularized) empirical risk minimization''
- Nyström subsampling method for coefficient-based regularized regression
- Error analysis for \(l^q\)-coefficient regularized moving least-square regression
- A simpler approach to coefficient regularized support vector machines regression
- Indefinite kernel network with \(l^q\)-norm regularization
- Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Regularized modal regression with data-dependent hypothesis spaces
- Learning rates for regularized least squares ranking algorithm
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- Multikernel regression with sparsity constraint
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Sparsity and error analysis of empirical feature-based regularization schemes
- Learning sparse and smooth functions by deep sigmoid nets
- Learning rates for classification with Gaussian kernels
- On the convergence rate of kernel-based sequential greedy regression
- Distributed learning with multi-penalty regularization
- Reproducing kernel Banach spaces with the \(\ell^{1}\) norm. II: Error analysis for regularized least square regression
- Distributed learning with regularized least squares
- Learning by atomic norm regularization with polynomial kernels
- Boosted kernel ridge regression: optimal learning rates and early stopping
- Parameter choices for sparse regularization with the \(\ell^1\) norm
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- On reproducing kernel Banach spaces: generic definitions and unified framework of constructions
- Least squares regression with \(l_1\)-regularizer in sum space
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Sparse additive machine with ramp loss
- On grouping effect of elastic net
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- An approximation theory approach to learning with \(\ell^1\) regularization
- Distributed regression learning with coefficient regularization
- Coefficient-based regularization network with variance loss for error
- Approximation on variable exponent spaces by linear integral operators
- Learning theory of randomized sparse Kaczmarz method
- Convergence rate of the semi-supervised greedy algorithm
- Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm
- Distributed learning with partial coefficients regularization
- Learning with convex loss and indefinite kernels
- Learning and approximating piecewise smooth functions by deep sigmoid neural networks
- Coefficient-based regularized distribution regression
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Iterative kernel regression with preconditioning