Continuum directions for supervised dimension reduction
dimension reduction; principal component analysis; linear discriminant analysis; high-dimension; continuum regression; low-sample-size (HDLSS); maximum data piling
- Computational methods for problems pertaining to statistics (62-08)
- Factor analysis and principal components; correspondence analysis (62H25)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
Abstract: Dimension reduction of multivariate data supervised by auxiliary information is considered. A series of bases for dimension reduction is obtained as minimizers of a novel criterion. The proposed method is akin to continuum regression, and the resulting bases are called continuum directions. In the presence of binary supervision data, these directions continuously bridge the principal component, mean difference and linear discriminant directions, thus ranging from unsupervised to fully supervised dimension reduction. High-dimensional asymptotic studies of continuum directions for binary supervision reveal several interesting facts: in particular, conditions are specified under which the sample continuum directions are inconsistent yet their classification performance remains good. While the proposed method can be used directly for binary and multi-category classification, generalizations that incorporate any form of auxiliary data are also presented. The proposed method is fast to compute, and its performance is better than or on par with that of more computer-intensive alternatives.
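The bridging of principal component, mean difference and linear discriminant directions described in the abstract can be illustrated with the ridge-type solution path known from continuum regression (cf. the cited "A Generalized View on Continuum Regression"), in which each direction has the closed form w(δ) ∝ (S − δI)⁻¹ d for a covariance matrix S and mean-difference vector d. The sketch below is an illustrative assumption based on that generic form, not the paper's exact criterion; the function name `continuum_direction` is hypothetical.

```python
import numpy as np

def continuum_direction(X, y, delta):
    """Ridge-type direction family w(delta) ∝ (S - delta*I)^{-1} d.

    Illustrative sketch (not the paper's exact criterion): S is the total
    sample covariance and d the class-mean difference for binary labels y.
    delta = 0 gives the LDA-type direction S^{-1} d; delta -> -infinity
    recovers the mean-difference direction; delta just above the largest
    eigenvalue of S tilts the solution toward the first principal component.
    """
    d = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)  # mean difference
    S = np.cov(X.T)                                       # total covariance
    w = np.linalg.solve(S - delta * np.eye(X.shape[1]), d)
    return w / np.linalg.norm(w)                          # unit-norm direction
```

Sweeping `delta` thus traces one continuous path from unsupervised (PCA-like) to fully supervised (LDA-like) directions, matching the "continuum" idea in the abstract.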
Cites work
- scientific article; zbMATH DE number 4163945
- scientific article; zbMATH DE number 3744343
- scientific article; zbMATH DE number 472964
- scientific article; zbMATH DE number 1391397
- A Generalized View on Continuum Regression
- A direct estimation approach to sparse linear discriminant analysis
- A road to classification in high dimensional space: the regularized optimal affine discriminant
- A survey of high dimension low sample size asymptotics
- Boundary behavior in high dimension, low sample size asymptotics of PCA
- Clustering high dimension, low sample size data using the maximal data piling distance
- Distance-Weighted Discrimination
- Efficient semiparametric estimation of the Fama-French model and extensions
- Envelope models for parsimonious and efficient multivariate linear regression
- Envelopes and Partial Least Squares Regression
- Extensions of sparse canonical correlation analysis with applications to genomic data
- Geometric Representation of High Dimension, Low Sample Size Data
- High dimension low sample size asymptotics of robust PCA
- High-dimensional classification using features annealed independence rules
- Joint and individual variation explained (JIVE) for integrated analysis of multiple data types
- PCA Consistency for Non-Gaussian Data in High Dimension, Low Sample Size Context
- PCA consistency in high dimension, low sample size context
- Projected principal component analysis in factor models
- Reduced-rank regression for the multivariate linear model
- Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
- Sparse linear discriminant analysis by thresholding for high dimensional data
- Sufficient Dimension Reduction via Inverse Regression
- Supervised singular value decomposition and its asymptotic properties
- The high-dimension, low-sample-size geometric representation holds under mild conditions
- The maximal data piling direction for discrimination
- Weighted distance weighted discrimination and its asymptotic properties
Cited in (7)
- Supervised dimension reduction of intrinsically low-dimensional data
- scientific article; zbMATH DE number 7376763
- Unsupervised dimensionality reduction versus supervised regularization for classification from sparse data
- Functional continuum regression
- Double data piling: a high-dimensional solution for asymptotically perfect multi-category classification
- scientific article; zbMATH DE number 5957198
- Double data piling leads to perfect classification