Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
DOI: 10.1287/ijoc.2022.1224 · OpenAlex: W4292377237 · MaRDI QID: Q5060788
Yao Wang, Shaojie Tang, Unnamed Author, Shao-Bo Lin
Publication date: 11 January 2023
Published in: INFORMS Journal on Computing
Full work available at URL: https://doi.org/10.1287/ijoc.2022.1224
Cites Work
- Greedy function approximation: A gradient boosting machine
- Bagging predictors
- Smooth minimization of non-smooth functions
- Characterizing \(L_{2}\)Boosting
- Boosting algorithms: regularization, prediction and model fitting
- Logistic classification with varying Gaussians
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Fast rates for support vector machines using Gaussian kernels
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- A decision-theoretic generalization of on-line learning and an application to boosting
- Optimal learning rates for kernel partial least squares
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)
- Process consistency for AdaBoost
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- Boosting a weak learning algorithm by majority
- Regularization networks and support vector machines
- Greedy approximation in convex optimization
- Optimal rates for the regularized least-squares algorithm
- Approximation and learning by greedy algorithms
- Boosting with early stopping: convergence and consistency
- On early stopping in gradient descent learning
- Convergence rates of Kernel Conjugate Gradient for random design regression
- A Constraint Programming Approach for Solving a Queueing Design and Control Problem
- Decision-Tree-Based Knowledge Discovery: Single- vs. Multi-Decision-Tree Induction
- The Rate of Convergence of AdaBoost
- Support Vector Machines
- Greedy approximation
- Active Learning with Multiple Localized Regression Models
- Sequential greedy approximation for certain convex optimization problems
- An \(L_{2}\)-Boosting Algorithm for Estimation of a Regression Function
- Optimization for L1-Norm Error Fitting via Data Aggregation
- Optimization of Tree Ensembles
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
- Learning Rates for Classification with Gaussian Kernels
- A Fast Learning Algorithm for Deep Belief Nets
- New analysis and results for the Frank-Wolfe method