Double data piling leads to perfect classification
DOI: 10.1214/21-EJS1945 · zbMATH Open: 1493.62365 · OpenAlex: W4206194242 · MaRDI QID: Q2074331
Jeongyoun Ahn, Sungkyu Jung, Woonyoung Chang
Publication date: 9 February 2022
Published in: Electronic Journal of Statistics
Full work available at URL: https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-15/issue-2/Double-data-piling-leads-to-perfect-classification/10.1214/21-EJS1945.full
Mathematics Subject Classification:
- Factor analysis and principal components; correspondence analysis (62H25)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cites Work
- Title not available
- Regularized linear discriminant analysis and its application in microarrays
- Determining the Number of Factors in Approximate Factor Models
- On the number of principal components in high dimensions
- PCA consistency in high dimension, low sample size context
- Basic properties of strong mixing conditions. A survey and some open questions
- Weighted Distance Weighted Discrimination and Its Asymptotic Properties
- Geometric Representation of High Dimension, Low Sample Size Data
- The high-dimension, low-sample-size geometric representation holds under mild conditions
- On Strong Mixing Conditions for Stationary Gaussian Processes
- Achieving near Perfect Classification for Functional Data
- Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations
- The maximal data piling direction for discrimination
- A survey of high dimension low sample size asymptotics
- Geometric consistency of principal component scores for high‐dimensional mixture models and its application
- The application of bias to discriminant analysis
- Estimation of the number of spikes, possibly equal, in the high-dimensional case
- Continuum directions for supervised dimension reduction
- Benign overfitting in linear regression
- Distance-based classifier by data transformation for high-dimension, strongly spiked eigenvalue models
- Robust high-dimensional factor models with applications to statistical machine learning
- When and Why are Principal Component Scores a Good Tool for Visualizing High‐dimensional Data?
- Inference on high-dimensional mean vectors under the strongly spiked eigenvalue model
Cited In (1)