A high-dimensional classification rule using sample covariance matrix equipped with adjusted estimated eigenvalues
Publication: 6541769
DOI: 10.1002/STA4.358
MaRDI QID: Q6541769
FDO: Q6541769
Authors: Seungchul Baek, Hoyoung Park, Junyong Park
Publication date: 21 May 2024
Published in: Stat
Cites Work
- Regularized linear discriminant analysis and its application in microarrays
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Prediction by Supervised Principal Components
- High-dimensional classification using features annealed independence rules
- A well-conditioned estimator for large-dimensional covariance matrices
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Sparse linear discriminant analysis by thresholding for high dimensional data
- Statistical challenges with high dimensionality: feature selection in knowledge discovery
- Estimation with quadratic loss
- Title not available
- A Road to Classification in High Dimensional Space: The Regularized Optimal Affine Discriminant
- Estimation of a covariance matrix under Stein's loss
- Lectures on the theory of estimation of many parameters
- Improved multivariate normal mean estimation with unknown covariance when p is greater than n
- Improved Stein-type shrinkage estimators for the high-dimensional multivariate normal covariance matrix
- Stein's Estimation Rule and Its Competitors--An Empirical Bayes Approach
- Some limit theorems for the eigenvalues of a sample covariance matrix
- Robust shrinkage estimation for elliptically symmetric distributions with unknown covariance matrix
- The maximal data piling direction for discrimination
- A survey of high dimension low sample size asymptotics
- Efficient quadratic regularization for expression arrays
- Regularization through variable selection and conditional MLE with application to classification in high dimensions
- The application of bias to discriminant analysis
- Minimax estimates of a normal mean vector for arbitrary quadratic loss and unknown covariance matrix
- Sparse HDLSS discrimination with constrained data piling
- Application of non parametric empirical Bayes estimation to high dimensional classification