Distance-based classifier by data transformation for high-dimension, strongly spiked eigenvalue models
From MaRDI portal
Publication:2000734
Abstract: We consider classifiers for high-dimensional data under the strongly spiked eigenvalue (SSE) model. We first show that high-dimensional data often follow the SSE model. We then develop a distance-based classifier that uses eigenstructures of the SSE model, applying a noise-reduction methodology to estimate the eigenvalues and eigenvectors. We construct a new distance-based classifier by transforming data from the SSE model to the non-SSE model. Simulation studies illustrate the performance of the new classifier, and we demonstrate it on microarray data sets.
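The transformation idea summarized in the abstract can be sketched in code. The following is a minimal, illustrative Python sketch only: it estimates the spiked eigen-directions by a plain pooled SVD (not the paper's noise-reduction methodology), projects them out to move the data toward a non-SSE structure, and then applies a Euclidean distance classifier to the transformed data. The function name, the simulation setup, and all parameters are assumptions for illustration, not the authors' estimator.

```python
import numpy as np

def transform_and_classify(X1, X2, X_new, k=1):
    """Illustrative sketch of a transformation-based distance classifier.

    Estimates the top-k spiked eigen-directions by pooled PCA (plain SVD,
    NOT the paper's noise-reduction estimator), projects them out, and
    classifies new points by Euclidean distance to the transformed class
    means. Returns predicted labels 1 or 2 for the rows of X_new.
    """
    # Center each class by its own mean and pool the residuals,
    # so the SVD targets the covariance spikes rather than the mean shift.
    R = np.vstack([X1 - X1.mean(axis=0), X2 - X2.mean(axis=0)])
    # Top-k right singular vectors approximate the spiked eigenvectors.
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    V = Vt[:k].T                          # (p, k) estimated spiked directions
    P = np.eye(R.shape[1]) - V @ V.T      # projector onto their complement
    # Euclidean distance classifier in the transformed (projected) space.
    m1, m2 = X1.mean(axis=0) @ P, X2.mean(axis=0) @ P
    Z = X_new @ P
    d1 = np.sum((Z - m1) ** 2, axis=1)
    d2 = np.sum((Z - m2) ** 2, axis=1)
    return np.where(d1 <= d2, 1, 2)
```

Without the projection step, a strong common spike inflates the sample class means along the spiked direction and can dominate the Euclidean distances; removing the estimated spike first is what lets the distance rule see the mean difference, which is the motivation the abstract gives for transforming SSE data to non-SSE data.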
Recommendations
- A classifier under the strongly spiked eigenvalue model in high-dimension, low-sample-size context
- A quadratic classifier for high-dimension, low-sample-size data under the strongly spiked eigenvalue model
- Geometric classifiers for high-dimensional noisy data
- Two-sample tests for high-dimension, strongly spiked eigenvalue models
- Inference on high-dimensional mean vectors under the strongly spiked eigenvalue model
Cites work
- Scientific article, zbMATH DE number 889593 (title unavailable)
- A direct estimation approach to sparse linear discriminant analysis
- A distance-based, misclassification rate adjusted classifier for multiclass, high-dimensional data
- A two-sample test for high-dimensional data with applications to gene-set testing
- Asymptotic properties of the misclassification rates for Euclidean distance discriminant rule in high-dimensional data
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Dependent central limit theorems and invariance principles
- Distance-Weighted Discrimination
- Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations
- Effective PCA for high-dimension, low-sample-size data with singular value decomposition of cross data matrix
- Geometric Representation of High Dimension, Low Sample Size Data
- Geometric classifier for multiclass, high-dimensional data
- High-dimensional classification using features annealed independence rules
- Scale adjustments for classifiers in high-dimensional, low sample size settings
- Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
- Sparse Quadratic Discriminant Analysis For High Dimensional Data
- Sparse linear discriminant analysis by thresholding for high dimensional data
- Support vector machine and its bias correction in high-dimension, low-sample-size settings
- The maximal data piling direction for discrimination
- Theoretical Measures of Relative Performance of Classifiers for High Dimensional Data with Small Sample Sizes
- Two-sample tests for high-dimension, strongly spiked eigenvalue models
- Two-stage procedures for high-dimensional data
Cited in (22)
- Geometric classifiers for high-dimensional noisy data
- A quadratic classifier for high-dimension, low-sample-size data under the strongly spiked eigenvalue model
- Consistent variable selection criteria in multivariate linear regression even when dimension exceeds sample size
- Hypothesis tests for high-dimensional covariance structures
- Kick-one-out-based variable selection method for Euclidean distance-based classifier in high-dimensional settings
- Statistical inference under the strongly spiked eigenvalue model
- Scientific article, zbMATH DE number 7387552 (title unavailable)
- Inference on high-dimensional mean vectors under the strongly spiked eigenvalue model
- The robust nearest shrunken centroids classifier for high-dimensional heavy-tailed data
- Median-based classifiers for high-dimensional data
- Bias-corrected support vector machine with Gaussian kernel in high-dimension, low-sample-size settings
- Simultaneous testing of the mean vector and covariance matrix among k populations for high-dimensional data
- On some transformations of high dimension, low sample size data for nearest neighbor classification
- Predictive performances of implicitly and explicitly robust classifiers on high dimensional data
- Double data piling leads to perfect classification
- Clustering by principal component analysis with Gaussian kernel in high-dimension, low-sample-size settings
- Asymptotic properties of distance-weighted discrimination and its bias correction for high-dimension, low-sample-size data
- A classifier under the strongly spiked eigenvalue model in high-dimension, low-sample-size context
- Semiparametric estimation of the high-dimensional elliptical distribution
- Asymptotic properties of multiclass support vector machine under high dimensional settings
- Robust support vector machine for high-dimensional imbalanced data
- Test for high-dimensional outliers with principal component analysis