Estimating the algorithmic variance of randomized ensembles via the bootstrap
DOI: 10.1214/18-AOS1707 · zbMATH Open: 1415.62045 · arXiv: 1907.08742 · MaRDI QID: Q666594
Authors: Miles E. Lopes
Publication date: 6 March 2019
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1907.08742
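The paper's subject is estimating the variance of an ensemble prediction that comes from the algorithm's own randomization (e.g. the random trees in a forest), by bootstrapping over the base learners rather than the data. A minimal sketch of that idea, with synthetic stand-in predictions (the variable names, sizes, and the Gaussian stand-in are illustrative assumptions, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: outputs of t randomized base learners at one test point.
# In a random forest, these would be the individual tree predictions.
t = 200
base_preds = rng.normal(loc=0.3, scale=1.0, size=t)  # stand-in for tree outputs

def bootstrap_alg_variance(preds, n_boot=2000, rng=None):
    """Estimate the variance of the ensemble average that is due to the
    algorithm's internal randomness, by resampling base learners."""
    rng = rng or np.random.default_rng()
    t = len(preds)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, t, size=t)  # resample base learners with replacement
        stats[b] = preds[idx].mean()      # recompute the ensemble prediction
    return stats.var(ddof=1)

est = bootstrap_alg_variance(base_preds, rng=rng)
# For i.i.d. base predictions with variance sigma^2, the ensemble average has
# algorithmic variance roughly sigma^2 / t, so est should be near 1/t here.
print(est)
```

The point of such an estimate is practical: it indicates how much the prediction would fluctuate if the ensemble were retrained with fresh randomness, and hence whether the ensemble size t is large enough.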
Recommendations
- Estimating a sharp convergence bound for randomized ensembles
- Bootstrap bias corrections for ensemble methods
- Measuring the algorithmic convergence of randomized ensembles: the regression setting
- Computationally efficient double bootstrap variance estimation
- Computation of Exact Bootstrap Confidence Intervals: Complexity and Deterministic Algorithms
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Nonparametric statistical resampling methods (62G09)
- Prediction theory (aspects of stochastic processes) (60G25)
- Randomized algorithms (68W20)
Cites Work
- The elements of statistical learning. Data mining, inference, and prediction
- Weak convergence and empirical processes. With applications to statistics
- Consistency of random forests
- Consistency of random forests and other averaging classifiers
- Title not available
- Random forests
- Bagging predictors
- Random Forests and Adaptive Nearest Neighbors
- Quantifying uncertainty in random forests via confidence intervals and hypothesis tests
- Foundations of Modern Probability
- Random-projection ensemble classification. (With discussion).
- Analyzing bagging
- Title not available
- Estimation and accuracy after model selection
- Sample size selection in optimization methods for machine learning
- Title not available
- Title not available
- Condition. The geometry of numerical algorithms
- Title not available
- On the asymptotics of random forests
- Analysis of a random forests model
- Extrapolation methods: theory and practice
- Boosting. Foundations and algorithms.
- Practical Extrapolation Methods
- Richardson Extrapolation and the Bootstrap
- How large should ensembles of classifiers be?
- Comments on: "A random forest guided tour"
- Random Forests and Kernel Methods
- Variance reduction in purely random forests
- A bootstrap method for error estimation in randomized matrix multiplication
- Properties of Bagged Nearest Neighbour Classifiers
- Title not available
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Standard errors for bagged and random forest estimators
Cited In (11)
- Bootstrapping the operator norm in high dimensions: error estimation for covariance matrices and sketching
- Randomized numerical linear algebra: Foundations and algorithms
- A bootstrap method for error estimation in randomized matrix multiplication
- How large should ensembles of classifiers be?
- Estimating a sharp convergence bound for randomized ensembles
- Title not available
- Title not available
- Measuring the algorithmic convergence of randomized ensembles: the regression setting
- Title not available
- Estimating the algorithmic variance of randomized ensembles via the bootstrap
- Standard errors for bagged and random forest estimators