The high-dimension, low-sample-size geometric representation holds under mild conditions

From MaRDI portal

DOI: 10.1093/biomet/asm050
zbMath: 1135.62039
OpenAlex: W2018158023
MaRDI QID: Q5447664

Keith E. Muller, Jeongyoun Ahn, Yueh-Yun Chi, James Stephen Marron

Publication date: 20 March 2008

Published in: Biometrika

Full work available at URL: https://semanticscholar.org/paper/51ff8a4cebbfa3b774e367a3cc105e44b37e792b
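The result named in the title can be illustrated with a minimal simulation (a sketch, not code from the paper; the setup assumes i.i.d. standard normal coordinates and all variable names are mine): for fixed sample size \(n\) and dimension \(d \to \infty\), suitably scaled data vectors have nearly equal norms and are nearly mutually orthogonal, i.e. they approach the vertices of a regular simplex.

```python
import numpy as np

# Hypothetical demonstration of the HDLSS geometric representation:
# n fixed, d very large, coordinates i.i.d. standard normal.
rng = np.random.default_rng(0)
n, d = 5, 100_000
X = rng.standard_normal((n, d)) / np.sqrt(d)  # scale so E||x_i||^2 = 1

norms = np.linalg.norm(X, axis=1)             # each norm concentrates near 1
gram = X @ X.T
off_diag = gram[~np.eye(n, dtype=bool)]       # inner products concentrate near 0

print(norms)     # all entries close to 1
print(off_diag)  # all entries close to 0: near-orthogonality
```

With \(d = 10^5\) the fluctuations are of order \(1/\sqrt{d} \approx 0.003\), so the printed norms are visibly pinned to 1 and the inner products to 0; pairwise distances are correspondingly close to \(\sqrt{2}\), the simplex geometry the paper studies under much milder conditions than independence.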



Related Items

- Distance-based outlier detection for high dimension, low sample size data
- Significance analysis of high-dimensional, low-sample size partially labeled data
- Identification of consistent functional genetic modules
- Perturbation theory for cross data matrix-based PCA
- A generalized likelihood ratio test for normal mean when \(p\) is greater than \(n\)
- High dimensional asymptotics for the naive Hotelling T2 statistic in pattern recognition
- Continuum directions for supervised dimension reduction
- Statistical inference for high-dimension, low-sample-size data
- A High-Dimensional Two-Sample Test for Non-Gaussian Data under a Strongly Spiked Eigenvalue Model
- Correlation tests for high-dimensional data using extended cross-data-matrix methodology
- PCA consistency for the power spiked model in high-dimensional settings
- Asymptotics of hierarchical clustering for growing dimension
- Inference on high-dimensional mean vectors with fewer observations than the dimension
- A high dimensional dissimilarity measure
- PCA and eigen-inference for a spiked covariance model with largest eigenvalues of same asymptotic order
- Convergence and prediction of principal component scores in high-dimensional settings
- Projection pursuit via white noise matrices
- On the border of extreme and mild spiked models in the HDLSS framework
- Asymptotic properties of the first principal component and equality tests of covariance matrices in high-dimension, low-sample-size context
- Nonparametric classification of high dimensional observations
- On the eigenstructure of covariance matrices with divergent spikes
- Boundary behavior in high dimension, low sample size asymptotics of PCA
- An algorithm for deciding the number of clusters and validation using simulated data with application to exploring crop population structure
- A test of sphericity for high-dimensional data and its application for detection of divergently spiked noise
- A survey of high dimension low sample size asymptotics
- Binary discrimination methods for high-dimensional data with a geometric representation
- Distance-based and RKHS-based dependence metrics in high dimension
- Two-Step Hypothesis Testing When the Number of Variables Exceeds the Sample Size
- Intrinsic Dimensionality Estimation of High-Dimension, Low Sample Size Data with D-Asymptotics
- Discussion of: Treelets -- an adaptive multi-scale basis for sparse unordered data
- Subspace rotations for high-dimensional outlier detection
- Bias-corrected support vector machine with Gaussian kernel in high-dimension, low-sample-size settings
- Effective PCA for high-dimension, low-sample-size data with singular value decomposition of cross data matrix
- Projected principal component analysis in factor models
- Unit canonical correlations and high-dimensional discriminant analysis
- On asymptotic normality of cross data matrix-based PCA in high dimension low sample size
- A distance-based, misclassification rate adjusted classifier for multiclass, high-dimensional data
- Clustering by principal component analysis with Gaussian kernel in high-dimension, low-sample-size settings
- Discussion on "Two-Stage Procedures for High-Dimensional Data" by Makoto Aoshima and Kazuyoshi Yata
- PCA Consistency for Non-Gaussian Data in High Dimension, Low Sample Size Context
- Effective PCA for high-dimension, low-sample-size data with noise reduction via geometric representations
- Change-Point Detection of the Mean Vector with Fewer Observations than the Dimension Using Instantaneous Normal Random Projections
- Two-Stage Procedures for High-Dimensional Data
- Double data piling leads to perfect classification
- More about asymptotic properties of some binary classification methods for high dimensional data
- PCA consistency in high dimension, low sample size context
- Solving the linear interval tolerance problem for weight initialization of neural networks
- The remarkable simplicity of very high dimensional data: application of model-based clustering