Correlation and variable importance in random forests
From MaRDI portal
Abstract: This paper is about variable selection with the random forests algorithm in the presence of correlated predictors. In high-dimensional regression or classification frameworks, variable selection is a difficult task that becomes even more challenging in the presence of highly correlated predictors. Firstly, we provide a theoretical study of the permutation importance measure for an additive regression model. This allows us to describe how the correlation between predictors impacts the permutation importance. Our results motivate the use of the Recursive Feature Elimination (RFE) algorithm for variable selection in this context. This algorithm recursively eliminates variables using the permutation importance measure as a ranking criterion. Next, various simulation experiments illustrate the efficiency of the RFE algorithm for selecting a small number of variables while achieving a good prediction error. Finally, this selection algorithm is tested on the Landsat Satellite data from the UCI Machine Learning Repository.
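The RFE procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's random forest and permutation-importance utilities as stand-ins for the paper's estimator, and uses a toy additive model in which one predictor is a highly correlated copy of an informative one. The function name `rfe_permutation`, the sample size, and all hyperparameters are illustrative choices.

```python
# Sketch of Recursive Feature Elimination (RFE) ranked by permutation
# importance, as motivated by the paper. Assumes scikit-learn; the
# function name and settings here are illustrative, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def rfe_permutation(X, y, n_keep=2, random_state=0):
    """Recursively drop the least important variable until n_keep remain."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        rf = RandomForestRegressor(n_estimators=100, random_state=random_state)
        rf.fit(X[:, remaining], y)
        # Permutation importance: shuffle one column at a time and
        # measure the drop in the model's score.
        imp = permutation_importance(rf, X[:, remaining], y,
                                     n_repeats=5, random_state=random_state)
        worst = int(np.argmin(imp.importances_mean))
        del remaining[worst]
    return remaining

# Toy additive regression model: y depends on x0 and x1;
# x2 is a noisy copy of x0 (highly correlated), x3 is pure noise.
rng = np.random.default_rng(0)
n = 300
x0 = rng.normal(size=n)
X = np.column_stack([x0,
                     rng.normal(size=n),
                     x0 + 0.1 * rng.normal(size=n),
                     rng.normal(size=n)])
y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=n)

selected = rfe_permutation(X, y, n_keep=2)
print(selected)
```

Because x2 is nearly a duplicate of x0, the two share importance between them, which is exactly the correlation effect the paper analyzes; the recursive elimination mitigates this by re-fitting and re-ranking after each removal, so the importance of a variable recovers once its correlated competitor is dropped.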
Cites work
- Scientific article, zbMATH DE number 3860199 (no title available)
- Scientific article, zbMATH DE number 845714 (no title available)
- doi:10.1162/153244303322753616
- doi:10.1162/153244303322753643
- doi:10.1162/153244303322753706
- doi:10.1162/153244303322753715
- A new variable selection approach using random forests
- Bagging predictors
- Consistency of random forests
- Consistency of random forests and other averaging classifiers
- Correlated variables in regression: clustering and sparse estimation
- Empirical characterization of random forest variable importance measures
- Gene selection for cancer classification using support vector machines
- Linear Statistical Inference and its Applications
- Random forests
- Reinforcement learning trees
- Robust \(H_{\infty }\) control for nonlinear uncertain stochastic T-S fuzzy systems with time delays
- Selection bias in gene extraction on the basis of microarray gene-expression data
- Selection of relevant features and examples in machine learning
- Stability Selection
- Variable importance in binary regression trees and forests
- Variable selection in kernel Fisher discriminant analysis by means of recursive feature elimination
- Variable selection in model-based discriminant analysis
- Wrappers for feature subset selection
Cited in (27)
- Random forest-based approach for physiological functional variable selection for driver's stress level classification
- Grouped variable importance with random forests and application to multiple functional data analysis
- Measuring regional effects of model inputs with random Forest
- A new variable selection approach using random forests
- Variable importance in binary regression trees and forests
- Variable selection and importance in presence of high collinearity: an application to the prediction of lean body mass from multi-frequency bioelectrical impedance
- Variable selection by random forests using data with missing values
- Health care fraud classifiers in practice
- Trade-off between predictive performance and FDR control for high-dimensional Gaussian model selection
- A random forest guided tour
- Comments on "Data science, big data and statistics"
- Information criteria for model selection
- Forward variable selection for random forest models
- moreparty
- Ordinal trees and random forests: score-free recursive partitioning and improved ensembles
- Kernel-based measure of variable importance for genetic association studies
- Standard errors and confidence intervals for variable importance in random forest regression, classification, and survival
- A Study of Strength and Correlation in Random Forests
- Trees, forests, and impurity-based variable importance in regression
- Consistent and unbiased variable selection under independent features using random forest permutation importance
- Efficient permutation testing of variable importance measures by the example of random forests
- Empirical characterization of random forest variable importance measures
- Measuring the algorithmic convergence of randomized ensembles: the regression setting
- Random forest for ordinal responses: prediction and variable selection
- A computationally fast variable importance test for random forests for high-dimensional data
- Understanding complex predictive models with ghost variables
- All models are wrong, but many are useful: learning a variable's importance by studying an entire class of prediction models simultaneously