Regularization in kernel learning


Publication:847647

DOI: 10.1214/09-AOS728
zbMath: 1191.68356
arXiv: 1001.2094
MaRDI QID: Q847647

Shahar Mendelson, Joseph Neeman

Publication date: 19 February 2010

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1001.2094

Related Items (42)

Efficient kernel-based variable selection with sparsistency
Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
Least-squares regularized regression with dependent samples and \(q\)-penalty
Unnamed Item
Unnamed Item
Integral operator approach to learning theory with unbounded sampling
Learning with coefficient-based regularization and \(\ell^1\)-penalty
\(\ell_1\)-regularized linear regression: persistence and oracle inequalities
Optimal learning rates for least squares regularized regression with unbounded sampling
Convex regularization in statistical inverse learning problems
Orthogonal statistical learning
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
Structure learning via unstructured kernel-based M-estimation
Concentration estimates for learning with unbounded sampling
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Consistency of support vector machines using additive kernels for additive models
Estimating conditional quantiles with the help of the pinball loss
Optimal regression rates for SVMs using Gaussian kernels
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
General nonexact oracle inequalities for classes with a subexponential envelope
Fast learning from \(\alpha\)-mixing observations
Learning with Convex Loss and Indefinite Kernels
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Optimal rates for regularization of statistical inverse learning problems
Learning rates for kernel-based expectile regression
Kernel variable selection for multicategory support vector machines
A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths
Learning Rates of \(l^q\) Coefficient Regularization Learning with Gaussian Kernel
Coefficient-based regression with non-identical unbounded sampling
Distributed regularized least squares with flexible Gaussian kernels
Generalized support vector regression: Duality and tensor-kernel representation
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Fast and strong convergence of online learning algorithms
Asymptotic normality of support vector machine variants and other regularized kernel methods
Multikernel Regression with Sparsity Constraint
Unnamed Item
On the speed of uniform convergence in Mercer's theorem
Optimal learning with Gaussians and correntropy loss
Unnamed Item
Unnamed Item
Unnamed Item
Thresholded spectral algorithms for sparse approximations



