A new perspective on boosting in linear regression via subgradient optimization and relatives
Publication: Q682283
DOI: 10.1214/16-AOS1505
zbMATH Open: 1421.62086
arXiv: 1505.04243
MaRDI QID: Q682283
FDO: Q682283
Authors: Robert M. Freund, Paul Grigas, Rahul Mazumder
Publication date: 14 February 2018
Published in: The Annals of Statistics
Abstract: In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (\(\mathrm{FS}_\varepsilon\)) and least squares boosting (LS-Boost(\(\varepsilon\))), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of \(\mathrm{FS}_\varepsilon\) that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent) applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost(\(\varepsilon\)) and \(\mathrm{FS}_\varepsilon\)) by using techniques of modern first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular, they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
Full work available at URL: https://arxiv.org/abs/1505.04243
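For orientation, the two classic algorithms named in the abstract can be sketched in a few lines. The following is a minimal sketch based on their standard descriptions, not the authors' reference code; the function names, the assumption that the columns of X are standardized, and the default values of eps and n_iter are illustrative only.

```python
import numpy as np

def ls_boost(X, y, eps=0.1, n_iter=500):
    """Sketch of LS-Boost(eps): at each step, regress the current
    residual on the single most-correlated column and move a
    fraction eps of the way toward that univariate fit.
    Assumes columns of X are standardized (illustrative)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                            # current residuals
    for _ in range(n_iter):
        corr = X.T @ r                      # inner products with residuals
        j = np.argmax(np.abs(corr))         # most correlated feature
        u = corr[j] / (X[:, j] @ X[:, j])   # univariate LS coefficient
        beta[j] += eps * u                  # shrunken update (learning rate eps)
        r -= eps * u * X[:, j]              # update residuals
    return beta

def fs_eps(X, y, eps=0.01, n_iter=2000):
    """Sketch of incremental forward stagewise (FS_eps): same
    selection rule, but the step is a fixed eps in the direction
    of the sign of the correlation."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()
    for _ in range(n_iter):
        corr = X.T @ r
        j = np.argmax(np.abs(corr))
        step = eps * np.sign(corr[j])
        beta[j] += step
        r -= step * X[:, j]
    return beta
```

The paper's perspective is that both update rules are steps of one master algorithm, subgradient descent on the maximum absolute correlation loss (the max over features of the absolute value of the quantity computed as `corr` above).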
Cited In (11)
- Discussion of "Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons"
- Properties of subgradient projection iteration when applying to linear imaging system
- Restricted strong convexity implies weak submodularity
- Boosting with structural sparsity: a differential inclusion approach
- Characterizing \(L_{2}\)Boosting
- On the selection of predictors by using greedy algorithms and information theoretic criteria
- Pinball boosting of regression quantiles
- Title not available
- New analysis and results for the Frank-Wolfe method
- Randomized Gradient Boosting Machine
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers