Variable selection for classification and regression in large p, small n problems
From MaRDI portal
Publication:5259031
Cited in (17):
- Random forest and variable importance rankings for correlated survival data, with applications to tooth loss
- A new variable selection approach using random forests
- Variable selection and importance in presence of high collinearity: an application to the prediction of lean body mass from multi-frequency bioelectrical impedance
- Nonparametric variable selection and classification: the CATCH algorithm
- Why significant variables aren't automatically good predictors
- Significance analysis for pairwise variable selection in classification
- Large numbers of explanatory variables: a probabilistic assessment
- Variable selection for binary classification in large dimensions: comparisons and application to microarray data
- Facilitating high-dimensional transparent classification via empirical Bayes variable selection
- Using random subspace method for prediction and variable importance assessment in linear regression
- Sparse partitioning: nonlinear regression with binary or tertiary predictors, with application to association studies
- Studying contributions of variables to classification
- Robust VIF regression with application to variable selection in large data sets
- Ensembling classification models based on phalanxes of variables with applications in drug discovery
- Optimal selection of sample-size dependent common subsets of covariates for multi-task regression prediction
- Canonical variates for recursive partitioning in data mining
- Performances of some high dimensional regression methods
This page was built for publication: Variable selection for classification and regression in large \(p\), small \(n\) problems