A new concentration result for regularized risk minimizers

DOI: 10.1214/074921706000000897
zbMath: 1127.68090
arXiv: math/0612779
OpenAlex: W1674949483
MaRDI QID: Q3592321

Ingo Steinwart, Don Hush, Clint Scovel

Publication date: 12 September 2007

Published in: High Dimensional Probability

Full work available at URL: https://arxiv.org/abs/math/0612779


Related Items

Online gradient descent algorithms for functional data learning
Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
Partially linear functional quantile regression in a reproducing kernel Hilbert space
Nonparametric stochastic approximation with large step-sizes
A partially linear framework for massive heterogeneous data
Lower bounds for invariant statistical models with applications to principal component analysis
Multiscale regression on unknown manifolds
Radial kernels and their reproducing kernel Hilbert spaces
Optimal learning rates for least squares regularized regression with unbounded sampling
Statistical performance of support vector machines
Error analysis for coefficient-based regularized regression in additive models
Consistency of support vector machines using additive kernels for additive models
Estimation of convergence rate for multi-regression learning algorithm
Self-concordant analysis for logistic regression
Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
Learning rates for kernel-based expectile regression
The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
Learning from dependent observations
Oracle inequalities for support vector machines that are based on random entropy numbers
Fast and strong convergence of online learning algorithms
Asymptotic normality of support vector machine variants and other regularized kernel methods
Learning under \((1 + \epsilon)\)-moment conditions