Consistency and robustness of kernel-based regression in convex risk minimization


DOI: 10.3150/07-BEJ5102
zbMATH Open: 1129.62031
arXiv: 0709.0626
Wikidata: Q59196404 (Scholia: Q59196404)
MaRDI QID: Q2469652
FDO: Q2469652


Authors: Andreas Christmann, Ingo Steinwart


Publication date: 6 February 2008

Published in: Bernoulli

Abstract: We investigate statistical properties of a broad class of modern kernel-based regression (KBR) methods. These kernel methods were developed during the last decade and are inspired by convex risk minimization in infinite-dimensional Hilbert spaces. One leading example is support vector regression. We first describe the relationship between the loss function L of the KBR method and the tail of the response variable. We then establish the L-risk consistency of KBR, which gives the mathematical justification for the statement that these methods are able to "learn". Next we consider robustness properties of such kernel methods. In particular, our results allow us to choose the loss function and the kernel so as to obtain computationally tractable and consistent KBR methods that have bounded influence functions. Furthermore, bounds for the bias and for the sensitivity curve, which is a finite-sample version of the influence function, are developed, and the relationship between KBR and classical M-estimators is discussed.
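
The setting described in the abstract can be made concrete with a small numerical sketch. The following Python snippet (an illustration added here, not code from the paper) fits a kernel-based regression estimate by minimizing a regularized empirical risk over an RKHS expansion f(x) = sum_j alpha_j k(x_j, x), using a Gaussian kernel and a Huber-type loss; the bounded derivative of such a loss is what underlies the bounded influence functions mentioned above. The kernel choice, loss parameters, helper names, and toy data are assumptions made for this example only.

# Minimal sketch (assumptions noted above) of kernel-based regression as
# regularized convex risk minimization: by the representer theorem the minimizer
# of (1/n) sum_i L(y_i, f(x_i)) + lambda * ||f||_H^2 over the RKHS H can be
# written as f(x) = sum_j alpha_j k(x_j, x), so we optimize over alpha.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def huber_loss(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear (bounded slope) for large ones."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))

def fit_kbr(X, y, lam=0.1, gamma=1.0, delta=1.0):
    """Minimize the regularized empirical L-risk over alpha (convex in alpha)."""
    K = gaussian_kernel(X, X, gamma)
    n = len(y)

    def objective(alpha):
        f = K @ alpha
        return huber_loss(y - f, delta).mean() + lam * alpha @ K @ alpha

    res = minimize(objective, x0=np.zeros(n), method="L-BFGS-B")
    return res.x, K

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
    y[:5] += 5.0  # a few gross outliers; the Huber loss limits their influence

    alpha, K = fit_kbr(X, y, lam=0.05, gamma=0.5)
    y_hat = K @ alpha
    print("training RMSE on non-outliers:", np.sqrt(np.mean((y_hat[5:] - y[5:]) ** 2)))

Swapping the Huber loss for a squared-error loss in the same sketch makes the fit visibly more sensitive to the injected outliers, which illustrates why the choice of loss function (and kernel) matters for the robustness properties the paper studies.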


Full work available at URL: https://arxiv.org/abs/0709.0626




Cited in 45 publications.




