Optimal estimation and computational limit of low-rank Gaussian mixtures
From MaRDI portal
Publication: 6172192
DOI: 10.1214/23-aos2264
arXiv: 2201.09040
OpenAlex: W4380490505
MaRDI QID: Q6172192
Publication date: 19 July 2023
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2201.09040
MSC classifications: Parametric inference under constraints (62F30); Minimax procedures in statistical decision theory (62C20)
Cites Work
- Upper bounds on product and multiplier empirical processes
- Convergence rates of parameter estimation for some weakly identifiable finite mixtures
- Statistical guarantees for the EM algorithm: from population to sample-based analysis
- A tail inequality for suprema of unbounded empirical processes with applications to Markov chains
- Consistent estimation of a mixing distribution
- Estimating a network from multiple noisy realizations
- Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics
- Rates of convergence for the Gaussian mixture sieve
- Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities
- Optimal rate of convergence for finite mixture models
- Weak convergence and empirical processes. With applications to statistics
- Optimality of spectral clustering in the Gaussian mixture model
- Community detection on mixture multilayer networks via regularized tensor decomposition
- Notes on computational hardness of hypothesis testing: predictions using the low-degree likelihood ratio
- Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations
- Spectral and matrix factorization methods for consistent community detection in multi-layer networks
- Optimal estimation of Gaussian mixtures via denoised method of moments
- Normal approximation and confidence region of singular subspaces
- Hellinger-consistency of certain nonparametric maximum likelihood estimators
- CHIME: clustering of high-dimensional Gaussian mixtures with EM algorithm and its optimality
- Common and individual structure of brain networks
- Statistically optimal and computationally efficient low rank tensor completion from noisy entries
- Efficiently learning mixtures of two Gaussians
- Learning Mixtures of Gaussians in High Dimensions
- Perturbation of Linear Forms of Singular Vectors Under Gaussian Noise
- Volume Ratio, Sparsity, and Minimaxity Under Unitarily Invariant Norms
- The Optimal Hard Threshold for Singular Values is \(4/\sqrt{3}\)
- Tensor SVD: Statistical and Computational Limits
- A non asymptotic penalized criterion for Gaussian mixture model selection
- Learning Mixtures of Low-Rank Models
- Tackling Small Eigen-Gaps: Fine-Grained Eigenvector Estimation and Inference Under Heteroscedastic Noise
- Learning mixtures of arbitrary Gaussians
- Dynamic Tensor Clustering
- The Sup-norm Perturbation of HOSVD and Low Rank Tensor Denoising
- Computationally efficient sparse clustering
- On strong identifiability and convergence rates of parameter estimation in finite mixtures
- Optimal estimation of high-dimensional Gaussian location mixtures
- Regularized matrix data clustering and its application to image analysis
- A Doubly Enhanced EM Algorithm for Model-Based Tensor Clustering