Regularization in kernel learning
From MaRDI portal
Publication: 847647
DOI: 10.1214/09-AOS728 · zbMATH Open: 1191.68356 · arXiv: 1001.2094 · MaRDI QID: Q847647
Authors: Shahar Mendelson, Joseph Neeman
Publication date: 19 February 2010
Published in: The Annals of Statistics
Abstract: Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.
Full work available at URL: https://arxiv.org/abs/1001.2094
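To make the abstract concrete: the estimator in question minimizes the empirical squared loss plus a penalty of the form \(\lambda\|f\|_K^p\) over the RKHS, and the paper's point is that fast error rates survive when the exponent \(p\) grows significantly slower than the standard choice \(p = 2\). The Python sketch below illustrates such an estimator under stated assumptions; it is not the authors' code, and the Gaussian kernel, the L-BFGS-B optimizer, and the values lam=0.1 and p=1.5 are illustrative choices. Since the penalty is an increasing function of the RKHS norm, the representer theorem still applies, so the minimizer can be written as \(f = \sum_i \alpha_i k(x_i, \cdot)\) with \(\|f\|_K^2 = \alpha^\top K \alpha\).

import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel (an illustrative choice).
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def fit_slow_growth_rkhs(X, y, lam=0.1, p=1.5, gamma=1.0):
    # Minimize (1/n) * ||K a - y||^2 + lam * (a' K a)^(p/2) over a,
    # i.e. least-squares risk plus lam * ||f||_K^p with p < 2.
    # A minimal sketch of the regularization scheme the abstract
    # describes, not the paper's algorithm (the paper is theoretical).
    n = len(y)
    K = gaussian_kernel(X, X, gamma)

    def objective(a):
        resid = K @ a - y
        norm_sq = max(a @ K @ a, 0.0)  # ||f||_K^2; clip tiny negatives
        return resid @ resid / n + lam * norm_sq ** (p / 2)

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    return K, res.x

# Usage: noisy sine regression on 50 points.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
K, alpha = fit_slow_growth_rkhs(X, y)
print("training MSE:", np.mean((K @ alpha - y) ** 2))

With \(p = 2\) this reduces to ordinary kernel ridge regression, which has a closed-form solution; for \(p < 2\) the penalty grows more slowly in \(\|f\|_K\), which is precisely the relaxation whose error rates the paper analyzes.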
Recommendations
- Learning the kernel function via regularization
- Some properties of regularized kernel methods
- Regularization, optimization, kernels, and support vector machines
- Ideal regularization for learning kernels from labels
- Scientific article (zbMATH DE number 1551793)
- Kernel Based Learning Methods: Regularization Networks and RBF Networks
- Regularization in a functional reproducing kernel Hilbert space
- Scientific article (zbMATH DE number 5116564)
- The optimal solution of multi-kernel regularization learning
Cites Work
- Learning Theory
- On the mathematical foundations of learning
- Sharper bounds for Gaussian and empirical processes
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Local Rademacher complexities
- The concentration of measure phenomenon
- Uniform Central Limit Theorems
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Optimal rates for the regularized least-squares algorithm
- Title not available
- The Generic Chaining
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Some limit theorems for empirical processes (with discussion)
- The importance of convexity in learning with squared loss
- 10.1162/1532443041424337 (DOI; title not available)
- The covering number in learning theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Fast rates for support vector machines using Gaussian kernels
- Random vectors in the isotropic position
- Statistical performance of support vector machines
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Title not available
- Estimating the approximation error in learning theory
- Regularity of Gaussian processes
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Empirical minimization
- Estimates of singular numbers of integral operators
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Obtaining fast error rates in nonconvex situations
- \(L_{p}\)-moments of random vectors via majorizing measures
- Subspaces and orthogonal decompositions generated by bounded orthogonal systems
- Title not available
- Fast rates for estimation error and oracle inequalities for model selection
- On weakly bounded empirical processes
Cited In (64)
- Nonparametric augmented probability weighting with sparsity
- Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
- Least squares approximations in linear statistical inverse learning problems
- Spectral regularized kernel two-sample tests
- Structure learning via unstructured kernel-based M-estimation
- Learning with centered reproducing kernels
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Multikernel regression with sparsity constraint
- Orthogonal statistical learning
- On the regularization of convolutional kernel tensors in neural networks
- Error analysis on regularized learning
- Title not available
- Fast and strong convergence of online learning algorithms
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Ideal regularization for learning kernels from labels
- Kernel variable selection for multicategory support vector machines
- Distributed learning with regularized least squares
- Coefficient-based regression with non-identical unbounded sampling
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Thresholded spectral algorithms for sparse approximations
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Efficient kernel-based variable selection with sparsistency
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Estimating conditional quantiles with the help of the pinball loss
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Distributed minimum error entropy algorithms
- On convergence of kernel learning estimators
- Learning the kernel function via regularization
- Distributed regularized least squares with flexible Gaussian kernels
- Optimal rates for regularization of statistical inverse learning problems
- Consistency of support vector machines using additive kernels for additive models
- Learning with convex loss and indefinite kernels
- Title not available
- Sparse kernel regression with coefficient-based \(\ell_q\)-regularization
- Asymptotic normality of support vector machine variants and other regularized kernel methods
- Concentration estimates for learning with unbounded sampling
- Learning by atomic norm regularization with polynomial kernels
- Generalized support vector regression: duality and tensor-kernel representation
- A meta-learning approach to the regularized learning -- case study: blood glucose prediction
- Approximate minimization of the regularized expected error over kernel models
- General nonexact oracle inequalities for classes with a subexponential envelope
- Sobolev norm learning rates for regularized least-squares algorithms
- Learning sets with separating kernels
- Feature space perspectives for learning the kernel
- Optimal regression rates for SVMs using Gaussian kernels
- Some properties of regularized kernel methods
- Integral operator approach to learning theory with unbounded sampling
- Regularization techniques for learning with matrices
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
- A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths
- Fast learning rates for regularized regression algorithms
- Convex regularization in statistical inverse learning problems
- Optimal learning with Gaussians and correntropy loss
- Learning rates for kernel-based expectile regression
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Fast learning from \(\alpha\)-mixing observations
- On the speed of uniform convergence in Mercer's theorem
- Statistical performance of optimal scoring in reproducing kernel Hilbert spaces
- Learning theory estimates with observations from general stationary stochastic processes
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces