Measuring the algorithmic convergence of randomized ensembles: the regression setting

From MaRDI portal
Publication:5037548
Abstract: When randomized ensemble methods such as bagging and random forests are implemented, a basic question arises: Is the ensemble large enough? In particular, the practitioner desires a rigorous guarantee that a given ensemble will perform nearly as well as an ideal infinite ensemble trained on the same data. The purpose of the current paper is to develop a bootstrap method for solving this problem in the context of regression, complementing our companion paper on the classification setting (Lopes 2019). In contrast to the classification setting, the current paper shows that theoretical guarantees for the proposed bootstrap can be established under much weaker assumptions. In addition, we illustrate the flexibility of the method by showing how it can be adapted to measure algorithmic convergence for variable selection. Lastly, we provide numerical results demonstrating that the method works well in a range of situations.
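The core idea can be illustrated with a minimal sketch: given the predictions of the individual ensemble members at a set of test points, resample the members with replacement and measure how much the resampled ensemble average fluctuates around the full-ensemble average. All names and details below (the function `bootstrap_convergence_error`, the choice of mean absolute deviation, the quantile level) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def bootstrap_convergence_error(preds, n_boot=500, alpha=0.9, rng=None):
    """Bootstrap estimate of the algorithmic fluctuation of an ensemble.

    preds : array of shape (t, n), row i holding the predictions of
            ensemble member i at n test points.
    Returns an estimate of the alpha-quantile of the mean absolute
    deviation between a size-t ensemble average and the full-ensemble
    average (used here as a proxy for the infinite ensemble).
    """
    rng = np.random.default_rng(rng)
    preds = np.asarray(preds, dtype=float)
    t, _ = preds.shape
    full_avg = preds.mean(axis=0)            # proxy for the ideal infinite ensemble
    errs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, t, size=t)     # resample members with replacement
        boot_avg = preds[idx].mean(axis=0)
        errs[b] = np.mean(np.abs(boot_avg - full_avg))
    return np.quantile(errs, alpha)
```

In this sketch the returned quantity shrinks roughly like 1/sqrt(t) as the ensemble size t grows, so a practitioner can keep adding members until the estimated fluctuation falls below a tolerance; the paper's contribution is to justify such bootstrap estimates rigorously in the regression setting.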


