Learning Topic Models: Identifiability and Finite-Sample Analysis
Publication: 6185581
DOI: 10.1080/01621459.2022.2089574
arXiv: 2110.04232
OpenAlex: W3205960700
MaRDI QID: Q6185581
Unnamed Author, Feng Liang, Yinyin Chen, Yun Yang
Publication date: 8 January 2024
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/2110.04232
Keywords: maximum likelihood, identifiability, topic models, volume minimization, finite-sample analysis, sufficiently scattered
Cites Work
- Integrated likelihood methods for eliminating nuisance parameters. (With comments and a rejoinder).
- The stochastic EM algorithm: Estimation and asymptotic results
- Latent semantic indexing: A probabilistic analysis
- Convergence rates of latent topic models under relaxed identifiability conditions
- Estimating the endpoint of a distribution in the presence of additive observation errors
- Estimation of convex supports from noisy measurements
- Posterior contraction of the population polytope in finite admixture models
- A spectral algorithm for latent Dirichlet allocation
- Tensor decompositions for learning latent variable models
- Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models
- On the complexity of four polyhedral set containment problems
- Non-Negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition
- Blind Separation of Quasi-Stationary Sources: Exploiting Convex Geometry in Covariance Domain
- Latent Dirichlet allocation
- Nonnegative Matrix Factorization Via Archetypal Analysis
- Spectral analysis of data
- Using mixture models for collaborative filtering