On the Bayes-risk consistency of regularized boosting methods.
Publication: 1884602
DOI: 10.1214/aos/1079120129
zbMath: 1105.62319
OpenAlex: W2068221105
MaRDI QID: Q1884602
Publication date: 5 November 2004
Published in: The Annals of Statistics
Full work available at URL: https://projecteuclid.org/euclid.aos/1079120129
Keywords: classification; empirical processes; boosting; smoothing parameter; Bayes-risk consistency; convex cost functions; penalized model selection
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Bayesian inference (62F15)
- Nonparametric inference (62G99)
Related Items
- Two-step sparse boosting for high-dimensional longitudinal data with varying coefficients
- Deep learning: a statistical viewpoint
- On the accuracy of cross-validation in the classification problem
- Population theory for boosting ensembles.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Fully online classification by regularization
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Multi-kernel regularized classifiers
- Universally consistent vertex classification for latent positions graphs
- Regularization in statistics
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- Infinitesimal gradient boosting
- Accelerated gradient boosting
- Ranking and empirical minimization of \(U\)-statistics
- A simple extension of boosting for asymmetric mislabeled data
- Aggregation of estimators and stochastic optimization
- Unbiased Boosting Estimation for Censored Survival Data
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Calibrated asymmetric surrogate losses
- Further results on the margin explanation of boosting: new algorithm and experiments
- Classification with minimax fast rates for classes of Bayes rules with sparse representation
- Boosting algorithms: regularization, prediction and model fitting
- Random classification noise defeats all convex potential boosters
- Bootstrap -- an exploration
- Simultaneous adaptation to the margin and to complexity in classification
- Cox process functional learning
- Optimal rates of aggregation in classification under low noise assumption
- Multiclass classification, information, divergence and surrogate risk
- On the rate of convergence for multi-category classification based on convex losses
- Boosting and instability for regression trees
- Learning gradients by a gradient descent algorithm
- Learning rates for multi-kernel linear programming classifiers
- Density estimation by the penalized combinatorial method
- Boosting for high-dimensional linear models
- Deformation of log-likelihood loss function for multiclass boosting
- New multicategory boosting algorithms based on multicategory Fisher-consistent losses
- A boosting method with asymmetric mislabeling probabilities which depend on covariates
- Theory of Classification: a Survey of Some Recent Advances
- Component-wisely sparse boosting
- On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer
- Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
- SVM-boosting based on Markov resampling: theory and algorithm
- On surrogate loss functions and \(f\)-divergences
- Convergence rates of generalization errors for margin-based classification
- On boosting kernel regression
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- AdaBoost and robust one-bit compressed sensing
- Complexities of convex combinations and bounding the generalization error in classification
- Boosting with early stopping: convergence and consistency
- Optimization by Gradient Boosting
Uses Software
Cites Work
- Bagging predictors
- A decision-theoretic generalization of on-line learning and an application to boosting
- Arcing classifiers. (With discussion)
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Process consistency for AdaBoost.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Boosting a weak learning algorithm by majority
- Boosting With the \(L_2\) Loss
- Combinatorial methods in density estimation
- Logistic regression, AdaBoost and Bregman distances