Regularization in kernel learning


DOI: 10.1214/09-AOS728 · zbMath: 1191.68356 · arXiv: 1001.2094 · MaRDI QID: Q847647

Joseph Neeman, Shahar Mendelson

Publication date: 19 February 2010

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1001.2094



Related Items

Efficient kernel-based variable selection with sparsistency
Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
Least-squares regularized regression with dependent samples and \(q\)-penalty
Integral operator approach to learning theory with unbounded sampling
Learning with coefficient-based regularization and \(\ell^1\)-penalty
\(\ell_{1}\)-regularized linear regression: persistence and oracle inequalities
Optimal learning rates for least squares regularized regression with unbounded sampling
Convex regularization in statistical inverse learning problems
Orthogonal statistical learning
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
Structure learning via unstructured kernel-based M-estimation
Concentration estimates for learning with unbounded sampling
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Consistency of support vector machines using additive kernels for additive models
Estimating conditional quantiles with the help of the pinball loss
Optimal regression rates for SVMs using Gaussian kernels
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
General nonexact oracle inequalities for classes with a subexponential envelope
Fast learning from \(\alpha\)-mixing observations
Learning with Convex Loss and Indefinite Kernels
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Optimal rates for regularization of statistical inverse learning problems
Learning rates for kernel-based expectile regression
Kernel variable selection for multicategory support vector machines
A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths
Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
Coefficient-based regression with non-identical unbounded sampling
Distributed regularized least squares with flexible Gaussian kernels
Generalized support vector regression: Duality and tensor-kernel representation
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Fast and strong convergence of online learning algorithms
Asymptotic normality of support vector machine variants and other regularized kernel methods
Multikernel Regression with Sparsity Constraint
On the speed of uniform convergence in Mercer's theorem
Optimal learning with Gaussians and correntropy loss
Thresholded spectral algorithms for sparse approximations


