Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities

From MaRDI portal
Publication:5211468

DOI: 10.1109/TIT.2019.2927563
zbMATH Open: 1432.62115
arXiv: 1707.01543
OpenAlex: W2961238573
MaRDI QID: Q5211468
FDO: Q5211468

Martin J. Wainwright, Fanny Yang, Yuting Wei

Publication date: 28 January 2020

Published in: IEEE Transactions on Information Theory

Abstract: Early stopping of iterative algorithms is a widely-used form of regularization in statistics, commonly used in conjunction with boosting and related gradient-type algorithms. Although consistency results have been established in some settings, such estimators are less well-understood than their analogues based on penalized regularization. In this paper, for a relatively broad class of loss functions and boosting algorithms (including L2-boost, LogitBoost and AdaBoost, among others), we exhibit a direct connection between the performance of a stopped iterate and the localized Gaussian complexity of the associated function class. This connection allows us to show that local fixed point analysis of Gaussian or Rademacher complexities, now standard in the analysis of penalized estimators, can be used to derive optimal stopping rules. We derive such stopping rules in detail for various kernel classes, and illustrate the correspondence of our theory with practice for Sobolev kernel classes.


Full work available at URL: https://arxiv.org/abs/1707.01543
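To illustrate the idea described in the abstract, here is a minimal sketch (not the paper's exact procedure) of L2-boosting as gradient descent in a reproducing kernel Hilbert space, stopped early. The stopping rule below, which halts after roughly 1/(step size × δₙ²) iterations with δₙ ~ n^(-1/3) as a plug-in critical radius for the first-order Sobolev kernel, is a simplified stand-in for the localized-complexity fixed-point rule derived in the paper; the data, kernel choice, and constants are illustrative assumptions.

```python
import numpy as np

# Synthetic regression data (illustrative).
rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.uniform(0, 1, n))
f_star = np.sin(2 * np.pi * x)               # true regression function
y = f_star + 0.5 * rng.standard_normal(n)    # noisy observations

# First-order Sobolev ("min") kernel matrix: K[i, j] = min(x_i, x_j).
K = np.minimum.outer(x, x)

# Step size bounded by 1 / lambda_max(K) so the iteration is stable.
step = 0.5 / np.linalg.eigvalsh(K).max()

# Plug-in critical radius for this kernel class: delta_n ~ n^(-1/3);
# stop after T ~ 1 / (step * delta_n^2) iterations (simplified rule).
delta_n = n ** (-1.0 / 3.0)
T = int(1.0 / (step * delta_n ** 2))

# L2-boosting: gradient descent on (1/2)||y - theta||^2 over fitted values,
# with updates preconditioned by the kernel matrix.
theta = np.zeros(n)
for t in range(T):
    grad = theta - y                         # gradient of the squared loss
    theta = theta - step * (K @ grad)

mse = np.mean((theta - f_star) ** 2)
print(f"stopped at T={T}, MSE to f* = {mse:.4f}")
```

Running the loop far beyond T would drive the fitted values toward the noisy observations y (overfitting); stopping at T keeps the estimate close to the smooth low-frequency component, which is the trade-off the localized-complexity analysis quantifies.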







Cited In (7)





