Random forest kernel for high-dimension low sample size classification
From MaRDI portal
Publication: 6089217
Recommendations
- Kernel naive Bayes discrimination for high-dimensional pattern recognition
- Random-projection ensemble classification (with discussion)
- An efficient random forests algorithm for high dimensional data classification
- Random subspace learning (RASSEL) with data driven weighting schemes
- Support vector machine and its bias correction in high-dimension, low-sample-size settings
Cites work
- Scientific article (no title available); zbMATH DE number 6860782
- doi:10.1162/15324430260185592
- Bias-corrected support vector machine with Gaussian kernel in high-dimension, low-sample-size settings
- Distance metric learning for large margin nearest neighbor classification
- Distance-Weighted Discrimination
- Do we need hundreds of classifiers to solve real world classification problems?
- High dimensional data classification and feature selection using support vector machines
- High-dimensional data classification
- Kernel methods in machine learning
- On some transformations of high dimension, low sample size data for nearest neighbor classification
- Random forests
- Regularized linear discriminant analysis and its application in microarrays
- Scikit-learn: machine learning in Python
- Statistical comparisons of classifiers over multiple data sets
- Support-vector networks
- Weighted distance weighted discrimination and its asymptotic properties
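The kernel in the title is derived from a random forest. A common construction of such a kernel is Breiman's proximity measure, where the similarity of two points is the fraction of trees in which they land in the same leaf; the sketch below illustrates that standard construction and is not necessarily the exact definition used in this publication. The `leaves` array of per-tree leaf indices is a hypothetical toy input.

```python
import numpy as np

# Hypothetical leaf assignments: rows are samples, columns are trees.
# Entry (i, t) is the index of the leaf that sample i reaches in tree t.
leaves = np.array([
    [0, 1, 0],
    [0, 1, 1],
    [2, 0, 1],
])

def rf_proximity_kernel(leaves):
    """Gram matrix K with K[i, j] = fraction of trees in which
    samples i and j fall into the same leaf."""
    # Broadcast to shape (n_samples, n_samples, n_trees), then average
    # the per-tree co-occurrence indicators over the trees.
    same_leaf = leaves[:, None, :] == leaves[None, :, :]
    return same_leaf.mean(axis=2)

K = rf_proximity_kernel(leaves)
# K is symmetric with unit diagonal; e.g. samples 0 and 1 share a leaf
# in 2 of the 3 trees, so K[0, 1] = 2/3.
```

In the high-dimension, low-sample-size setting of the title, this Gram matrix can be plugged into any kernel method (e.g. a kernel SVM), since the forest's axis-aligned splits provide a data-adaptive similarity that does not degenerate with growing dimension the way a fixed Euclidean kernel can.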