Accelerated gradient boosting
DOI: 10.1007/s10994-019-05787-1
zbMATH Open: 1493.68293
arXiv: 1803.02042
OpenAlex: W2794238513
Wikidata: Q128448954 (Scholia: Q128448954)
MaRDI QID: Q2425242 (FDO: Q2425242)
Authors: Benoît Cadre, Gérard Biau, Laurent Rouvière
Publication date: 26 June 2019
Published in: Machine Learning
Abstract: Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees, by solving an infinite-dimensional optimization problem. We combine gradient boosting and Nesterov's accelerated descent to design a new algorithm, which we call AGB (for Accelerated Gradient Boosting). Substantial numerical evidence is provided on both synthetic and real-life data sets to assess the excellent performance of the method in a large variety of prediction problems. It is empirically shown that AGB is much less sensitive to the shrinkage parameter and outputs predictors that are considerably sparser in the number of trees, while retaining the exceptional performance of gradient boosting.
Full work available at URL: https://arxiv.org/abs/1803.02042
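The abstract describes interleaving gradient-boosting tree fits with Nesterov's extrapolation step: each weak learner is fitted to pseudo-residuals evaluated at a momentum (lookahead) sequence rather than at the current model. Below is a minimal Python sketch of that idea for squared-error regression, assuming the standard FISTA-style momentum sequence; the names agb_fit and agb_predict and the parameters nu (shrinkage), n_rounds, and max_depth are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of accelerated gradient boosting for squared-error
# regression: gradient boosting with Nesterov extrapolation between
# two coupled model sequences F (current model) and G (lookahead).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def agb_fit(X, y, n_rounds=100, nu=0.1, max_depth=3):
    f0 = float(np.mean(y))          # constant initial model
    F = np.full(len(y), f0)         # fitted values of the current model
    G = F.copy()                    # fitted values of the lookahead model
    lam = 1.0                       # FISTA-style momentum sequence, t_1 = 1
    trees, gammas = [], []
    for _ in range(n_rounds):
        residuals = y - G           # negative gradient of squared loss at G
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F_new = G + nu * tree.predict(X)       # shrunken gradient step
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam * lam)) / 2.0
        gamma = (1.0 - lam) / lam_next         # <= 0 after the first round
        G = (1.0 - gamma) * F_new + gamma * F  # extrapolate past F_new
        F, lam = F_new, lam_next
        trees.append(tree)
        gammas.append(gamma)
    return f0, nu, trees, gammas

def agb_predict(model, X):
    # Replay the same two-sequence recursion on new inputs.
    f0, nu, trees, gammas = model
    F = np.full(X.shape[0], f0)
    G = F.copy()
    for tree, gamma in zip(trees, gammas):
        F_new = G + nu * tree.predict(X)
        G = (1.0 - gamma) * F_new + gamma * F
        F = F_new
    return F

# Toy usage on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
model = agb_fit(X, y, n_rounds=50)
print("training MSE:", np.mean((agb_predict(model, X) - y) ** 2))
```

Because the lookahead G extrapolates beyond the newest model, such a scheme can make larger effective steps than plain boosting at the same shrinkage, consistent with the abstract's observation that AGB needs fewer trees and is less sensitive to the shrinkage parameter.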
Cites Work
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- A decision-theoretic generalization of on-line learning and an application to boosting
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- COBRA: a combined regression strategy
- Greedy function approximation: A gradient boosting machine
- Random forests
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Gradient methods for minimizing composite functions
- Boosting algorithms: regularization, prediction and model fitting
- Boosting with early stopping: convergence and consistency
- Arcing classifiers. (With discussion)
- Boosting With the \(L_2\) Loss
- Prediction Games and Arcing Algorithms
- Stochastic gradient boosting
- Optimization by Gradient Boosting
- First-order methods of smooth convex optimization with inexact oracle
- Boosting a weak learning algorithm by majority
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Convex optimization: algorithms and complexity
- On the Bayes-risk consistency of regularized boosting methods
- Population theory for boosting ensembles
- doi:10.1162/1532443041424319
- Accelerated Distributed Nesterov Gradient Descent
Cited In (6)
- A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\)
- Quadratic boosting
- Benchmark for filter methods for feature selection in high-dimensional classification data
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Greedy function approximation: A gradient boosting machine
- Randomized Gradient Boosting Machine