Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
From MaRDI portal
Publication: 6072435
DOI: 10.1016/j.neunet.2021.12.016
arXiv: 2004.00179
OpenAlex: W3015163340
Wikidata: Q114950248
Scholia: Q114950248
MaRDI QID: Q6072435
No author found.
Publication date: 13 October 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2004.00179
Cites Work
- Greedy function approximation: A gradient boosting machine.
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- A decision-theoretic generalization of on-line learning and an application to boosting
- Introductory lectures on convex optimization. A basic course.
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Boosting a weak learning algorithm by majority
- Learning rates of multi-kernel regression by orthogonal greedy algorithm
- Accelerated gradient boosting
- Forward stagewise regression and the monotone lasso
- Approximation and learning by greedy algorithms
- Boosting with early stopping: convergence and consistency
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- The Rate of Convergence of AdaBoost
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
- Lower bounds for the rate of convergence of greedy algorithms
- Learning Theory
- Support Vector Machines
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Greedy approximation
- 10.1162/15324430152748218
- Boosting With the L2 Loss
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- An $L_{2}$-Boosting Algorithm for Estimation of a Regression Function
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
- Learning Rates for Classification with Gaussian Kernels
- Convexity, Classification, and Risk Bounds
- The elements of statistical learning. Data mining, inference, and prediction
- Random forests