A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
DOI: 10.1214/22-AOS2170 · zbMATH Open: 1490.68188 · arXiv: 2002.01586 · OpenAlex: W4283031248 · Wikidata: Q114060459 · Scholia: Q114060459 · MaRDI QID: Q2148995 · FDO: Q2148995
Publication date: 24 June 2022
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2002.01586
MSC:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Greedy function approximation: A gradient boosting machine.
- On the existence of maximum likelihood estimates in logistic regression models
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Boosting for high-dimensional linear models
- Boosting algorithms: regularization, prediction and model fitting
- Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
- Boosting with early stopping: convergence and consistency
- Arcing classifiers. (With discussion)
- Boosting With the \(L_2\) Loss
- Prediction Games and Arcing Algorithms
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Soft margins for AdaBoost
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Enumeration of Seven-Argument Threshold Functions
- Some inequalities for Gaussian processes and applications
- Boosting a weak learning algorithm by majority
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- A note on A. Albert and J. A. Anderson's conditions for the existence of maximum likelihood estimates in logistic regression models
- Rigorous solution of the Gardner problem
- On robust regression with high-dimensional predictors
- On the Bayes-risk consistency of regularized boosting methods.
- Population theory for boosting ensembles.
- Process consistency for AdaBoost.
- Analysis of boosting algorithms using the smooth margin function
- Complexities of convex combinations and bounding the generalization error in classification
- DOI: 10.1162/1532443041424319
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- The space of interactions in neural network models
- On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators
- A new perspective on boosting in linear regression via subgradient optimization and relatives
- Just interpolate: kernel ``ridgeless'' regression can generalize
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Learning Theory
- On the existence of linear weak learners and applications to boosting
- The Rate of Convergence of AdaBoost
- On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms
- Benign overfitting in linear regression
- The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled Chi-square
- A modern maximum-likelihood theory for high-dimensional logistic regression
- The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
- Surprises in high-dimensional ridgeless least squares interpolation
- Two Models of Double Descent for Weak Features
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- The asymptotic distribution of the MLE in high-dimensional logistic models: arbitrary covariance
- Which bridge estimator is the best for variable selection?
- Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
- A Unifying Tutorial on Approximate Message Passing
- Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Cited In (9)
- A Unifying Tutorial on Approximate Message Passing
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
- Universality of regularized regression estimators in high dimensions
- Noisy linear inverse problems under convex constraints: exact risk asymptotics in high dimensions
- The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- AdaBoost and robust one-bit compressed sensing