Regularization in kernel learning
Publication: 847647
DOI: 10.1214/09-AOS728 · zbMath: 1191.68356 · arXiv: 1001.2094 · MaRDI QID: Q847647
Joseph Neeman, Shahar Mendelson
Publication date: 19 February 2010
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1001.2094
Related Items (42)
- Efficient kernel-based variable selection with sparsistency
- Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
- Least-squares regularized regression with dependent samples and q-penalty
- Unnamed Item
- Unnamed Item
- Integral operator approach to learning theory with unbounded sampling
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Convex regularization in statistical inverse learning problems
- Orthogonal statistical learning
- Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
- Structure learning via unstructured kernel-based M-estimation
- Concentration estimates for learning with unbounded sampling
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Consistency of support vector machines using additive kernels for additive models
- Estimating conditional quantiles with the help of the pinball loss
- Optimal regression rates for SVMs using Gaussian kernels
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- General nonexact oracle inequalities for classes with a subexponential envelope
- Fast learning from \(\alpha\)-mixing observations
- Learning with Convex Loss and Indefinite Kernels
- Learning Theory Estimates with Observations from General Stationary Stochastic Processes
- Optimal rates for regularization of statistical inverse learning problems
- Learning rates for kernel-based expectile regression
- Kernel variable selection for multicategory support vector machines
- A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths
- Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
- Coefficient-based regression with non-identical unbounded sampling
- Distributed regularized least squares with flexible Gaussian kernels
- Generalized support vector regression: Duality and tensor-kernel representation
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Fast and strong convergence of online learning algorithms
- Asymptotic normality of support vector machine variants and other regularized kernel methods
- Multikernel Regression with Sparsity Constraint
- Unnamed Item
- On the speed of uniform convergence in Mercer's theorem
- Optimal learning with Gaussians and correntropy loss
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Thresholded spectral algorithms for sparse approximations
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Regularity of Gaussian processes
- Some limit theorems for empirical processes (with discussion)
- \(L_{p}\)-moments of random vectors via majorizing measures
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Subspaces and orthogonal decompositions generated by bounded orthogonal systems
- Obtaining fast error rates in nonconvex situations
- Fast rates for support vector machines using Gaussian kernels
- Random vectors in the isotropic position
- Sharper bounds for Gaussian and empirical processes
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- About the constants in Talagrand's concentration inequalities for empirical processes.
- The covering number in learning theory
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Optimal rates for the regularized least-squares algorithm
- Statistical performance of support vector machines
- On weakly bounded empirical processes
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Empirical minimization
- Learning rates of least-square regularized regression
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Learning Theory
- Fast rates for estimation error and oracle inequalities for model selection
- Estimates of singular numbers of integral operators
- Uniform Central Limit Theorems
- Estimating the approximation error in learning theory
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- The Generic Chaining
- The importance of convexity in learning with squared loss
- DOI: 10.1162/1532443041424337