Simplicial and minimal-variance distances in multivariate data analysis
Publication:2074654
DOI: 10.1007/s42519-021-00227-7 · zbMath: 1484.62062 · OpenAlex: W4206490122 · MaRDI QID: Q2074654
Emily O'Riordan, Anatoly A. Zhigljavsky, Jonathan Gillard
Publication date: 10 February 2022
Published in: Journal of Statistical Theory and Practice
Full work available at URL: https://doi.org/10.1007/s42519-021-00227-7
MSC classification:
- Asymptotic properties of parametric estimators (62F12)
- Computational methods for problems pertaining to statistics (62-08)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Related Items (1)
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Sparse inverse covariance estimation with the graphical lasso
- A well-conditioned estimator for large-dimensional covariance matrices
- Spectral analysis of the Moore-Penrose inverse of a large dimensional sample covariance matrix
- Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants
- Improved Stein-type shrinkage estimators for the high-dimensional multivariate normal covariance matrix
- Learning a Mahalanobis distance metric for data clustering and classification
- Partial identification and inference in censored quantile regression
- Simplicial variances, potentials and Mahalanobis distances
- On \(K\)-means algorithm with the use of Mahalanobis distances
- Regularized estimation of large covariance matrices
- High-dimensional graphs and variable selection with the Lasso
- Stability of feature selection in classification issues for high-dimensional correlated data
- A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation
- Introduction to Information Retrieval
- First-Order Methods for Sparse Covariance Selection
- Some properties of incomplete U-statistics
- Multi-Target Shrinkage Estimation for Covariance Matrices
- Least squares quantization in PCM
- A survey on unsupervised outlier detection in high‐dimensional numerical data
- Mahalanobis distance informed by clustering
- An improved modified Cholesky decomposition approach for precision matrix estimation
- Foundations of Data Science
- Linear Models in Statistics
- Condition-Number-Regularized Covariance Estimation