Measuring the algorithmic convergence of randomized ensembles: the regression setting
DOI: 10.1137/20M1343300
zbMATH Open: 1490.62161
arXiv: 1908.01251
OpenAlex: W3092422740
MaRDI QID: Q5037548
FDO: Q5037548
Authors: Miles E. Lopes, Suofei Wu, Thomas C. M. Lee
Publication date: 1 March 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/1908.01251
Recommendations
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Estimating a sharp convergence bound for randomized ensembles
- Standard errors for bagged and random forest estimators
- Quantifying uncertainty in random forests via confidence intervals and hypothesis tests
- How large should ensembles of classifiers be?
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Nonparametric statistical resampling methods (62G09)
Cites Work
- The elements of statistical learning. Data mining, inference, and prediction
- BART: Bayesian additive regression trees
- Variable importance in binary regression trees and forests
- Consistency of random forests
- ggplot2. Elegant graphics for data analysis. With contributions by Carson Sievert
- Consistency of random forests and other averaging classifiers
- Title not available
- Correlation and variable importance in random forests
- Random forests
- Bagging predictors
- Random Forests and Adaptive Nearest Neighbors
- Quantifying uncertainty in random forests via confidence intervals and hypothesis tests
- Random-projection ensemble classification. (With discussion).
- Analyzing bagging
- Title not available
- Optimal weighted nearest neighbour classifiers
- Title not available
- Title not available
- On the asymptotics of random forests
- Analysis of a random forests model
- Extrapolation methods. Theory and practice
- Practical Extrapolation Methods
- Isoperimetry and integrability of the sum of independent Banach-space valued random variables
- Extrapolation and the bootstrap
- Richardson Extrapolation and the Bootstrap
- How large should ensembles of classifiers be?
- Comments on: "A random forest guided tour"
- Random Forests and Kernel Methods
- A bootstrap method for error estimation in randomized matrix multiplication
- Properties of Bagged Nearest Neighbour Classifiers
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Standard errors for bagged and random forest estimators
- Bootstrapping max statistics in high dimensions: near-parametric rates under weak variance decay and application to functional and multinomial data
- Second-order properties of an extrapolated bootstrap without replacement under weak assumptions
- Extrapolation of subsampling distribution estimators: The i.i.d. and strong mixing cases
- Random rotation ensembles
- To tune or not to tune the number of trees in random forest
- Online bootstrap confidence intervals for the stochastic gradient descent estimator
- Scalable statistical inference for averaged implicit stochastic gradient descent
Cited In (5)
- Learning with mitigating random consistency from the accuracy measure
- Estimating a sharp convergence bound for randomized ensembles
- On a method for constructing ensembles of regression models
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Estimating the algorithmic variance of randomized ensembles via the bootstrap