Accelerated gradient boosting
MaRDI publication: Q2425242
Abstract: Gradient tree boosting is a prediction algorithm that sequentially produces a model in the form of linear combinations of decision trees by solving an infinite-dimensional optimization problem. We combine gradient boosting and Nesterov's accelerated descent to design a new algorithm, which we call AGB (for Accelerated Gradient Boosting). Substantial numerical evidence on both synthetic and real-life data sets demonstrates the excellent performance of the method in a large variety of prediction problems. Empirically, AGB is much less sensitive to the shrinkage parameter and outputs predictors that are considerably sparser in the number of trees, while retaining the exceptional performance of gradient boosting.
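The combination described in the abstract is easiest to see in code. Below is a minimal, illustrative sketch (not the authors' reference implementation) of accelerated boosting for the squared loss, assuming scikit-learn regression trees as weak learners; the names `agb_fit`, `n_rounds`, and `shrinkage` are placeholders chosen here. It maintains a model sequence \(F_t\) and a look-ahead sequence \(G_t\), fits each tree to pseudo-residuals evaluated at \(G_t\), and mixes the two sequences with a Nesterov-style momentum recursion \(\lambda_{t+1} = (1 + \sqrt{1 + 4\lambda_t^2})/2\), \(\gamma_t = (1 - \lambda_t)/\lambda_{t+1}\), as in FISTA.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def agb_fit(X, y, n_rounds=100, shrinkage=0.1, max_depth=3):
    """Illustrative accelerated-boosting sketch for squared loss.

    F_t is the boosted model, G_t the Nesterov look-ahead sequence;
    each tree is fit to pseudo-residuals evaluated at G_t, not F_t.
    """
    y = np.asarray(y, dtype=float)
    f_pred = np.full(len(y), y.mean())  # F_0: best constant model
    g_pred = f_pred.copy()              # G_0 = F_0
    trees, lam = [], 0.0                # lambda_0 = 0 starts the momentum sequence
    for _ in range(n_rounds):
        residuals = y - g_pred          # negative gradient of squared loss at G_t
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        trees.append(tree)
        f_new = g_pred + shrinkage * tree.predict(X)      # gradient step: F_{t+1}
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam**2)) / 2.0
        gamma = (1.0 - lam) / lam_next  # negative for t >= 2: extrapolation past F_{t+1}
        g_pred = (1.0 - gamma) * f_new + gamma * f_pred   # momentum mix: G_{t+1}
        f_pred, lam = f_new, lam_next
    return trees, f_pred                # fitted trees and training-set predictions

# Illustrative usage on toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=200)
trees, train_pred = agb_fit(X, y)
```

Note that a deployable version would also store the mixing coefficients (or the full linear combination of trees) so the coupled recursion can be replayed on new data; this sketch only tracks training-set predictions.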
Cites work
- scientific article (zbMATH DE number 3850830; title not available)
- scientific article (zbMATH DE number 845714; title not available)
- scientific article (zbMATH DE number 893887; title not available)
- DOI: 10.1162/1532443041424319 (title not available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A decision-theoretic generalization of on-line learning and an application to boosting
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Accelerated Distributed Nesterov Gradient Descent
- AdaBoost is consistent
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Arcing classifiers. (With discussion)
- Boosting with the \(L_2\) loss
- Boosting a weak learning algorithm by majority
- Boosting algorithms: regularization, prediction and model fitting
- Boosting with early stopping: convergence and consistency
- COBRA: a combined regression strategy
- Convex optimization: algorithms and complexity
- First-order methods of smooth convex optimization with inexact oracle
- Gradient methods for minimizing composite functions
- Greedy function approximation: A gradient boosting machine.
- Introductory lectures on convex optimization. A basic course.
- NESTA: A fast and accurate first-order method for sparse recovery
- On the Bayes-risk consistency of regularized boosting methods.
- Optimization by Gradient Boosting
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Population theory for boosting ensembles.
- Prediction Games and Arcing Algorithms
- Random forests
- Smooth minimization of non-smooth functions
- Some theory for generalized boosting algorithms
- Stochastic gradient boosting.
Cited in (14)
- Gradient boosting for convex cone predict and optimize problems
- A new accelerated proximal boosting machine with convergence rate \(O(1/t^2)\)
- A large-sample theory for infinitesimal gradient boosting
- Random gradient boosting for predicting conditional quantiles
- Quadratic boosting
- Wavelet-based gradient boosting
- Benchmark for filter methods for feature selection in high-dimensional classification data
- Extending models via gradient boosting: an application to Mendelian models
- Gradient boosting for high-dimensional prediction of rare events
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Greedy function approximation: A gradient boosting machine.
- Randomized Gradient Boosting Machine
- Method for improving gradient boosting learning efficiency based on modified loss functions
- Stochastic gradient boosting.