Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence
Publication: 2058782
DOI: 10.1007/s11222-021-10023-9
zbMath: 1475.62032
arXiv: 2012.14670
OpenAlex: W3115503773
MaRDI QID: Q2058782
Authors: Gersende Fort, P. Gach, Eric Moulines
Publication date: 9 December 2021
Published in: Statistics and Computing
Full work available at URL: https://arxiv.org/abs/2012.14670
Keywords: large scale learning; computational statistical learning; finite-sum optimization; incremental expectation maximization algorithm; momentum stochastic approximation
MSC classification: Computational methods for problems pertaining to statistics (62-08); Numerical optimization and variational techniques (65K10); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Minimizing finite sums with the stochastic average gradient
- On the convergence properties of the EM algorithm
- Convergence of the Monte Carlo expectation maximization for curved exponential families.
- Convergence of a stochastic approximation version of the EM algorithm
- Mini-batch learning of exponential family finite mixture models
- Keeping the balance -- bridge sampling for marginal likelihood estimation in finite mixture, mixture of experts and Markov mixture models
- MM Optimization Algorithms
- On-Line Expectation-Maximization Algorithm for Latent Data Models
- The EM Algorithm and Extensions, 2E
- Some NP-complete problems in quadratic and nonlinear programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Non-Linear Programming Via Penalty Functions
- Statistical Modelling by Exponential Families
- A Stochastic Approximation Method