On-line expectation-maximization algorithm for latent data models
Abstract: In this contribution, we propose a generic online (also sometimes called adaptive or recursive) version of the Expectation-Maximization (EM) algorithm applicable to latent variable models of independent observations. Compared to the algorithm of Titterington (1984), this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete-data distribution. The resulting algorithm is usually simpler and is shown to converge to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observations and the model distribution at the optimal rate, i.e., that of the maximum likelihood estimator. In addition, the proposed approach is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
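The generic recursion summarized in the abstract alternates a stochastic-approximation update of the running average of the complete-data sufficient statistics with an explicit M-step. A minimal sketch, applied for illustration to a two-component Gaussian mixture with unit variances (the model, initial values, and step-size exponent below are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def online_em_gmm(ys, mu_init=(-1.0, 1.0), pi_init=(0.5, 0.5), alpha=0.6):
    """Online EM sketch for a 1-D two-component Gaussian mixture, unit variances."""
    mu = np.array(mu_init, dtype=float)
    pi = np.array(pi_init, dtype=float)
    # Running averages of the sufficient statistics E[1{Z=k}] and E[1{Z=k} Y].
    s0 = pi.copy()
    s1 = pi * mu
    for n, y in enumerate(ys, start=1):
        gamma = (n + 1) ** (-alpha)  # step size gamma_n = (n+1)^(-alpha), 1/2 < alpha <= 1
        # E-step: posterior responsibilities under the current parameters.
        logw = np.log(pi) - 0.5 * (y - mu) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Stochastic-approximation update of the averaged sufficient statistics.
        s0 += gamma * (w - s0)
        s1 += gamma * (w * y - s1)
        # M-step: map the averaged statistics back to the parameter space.
        pi = s0 / s0.sum()
        mu = s1 / s0
    return pi, mu
```

With step sizes decaying as n^(-alpha) for 1/2 < alpha <= 1, this kind of recursion is the setting in which the paper establishes convergence to stationary points of the Kullback-Leibler divergence at the maximum-likelihood rate; the alpha chosen here is just one admissible value.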
Recommendations
- Online expectation maximization based algorithms for inference in hidden Markov models
- Convergence of a particle-based approximation of the block online expectation maximization algorithm
- Online Learning with Hidden Markov Models
- Advances in Intelligent Data Analysis VI
- Online EM algorithm for mixture with application to Internet traffic modeling
Cites work
- scientific article; zbMATH DE number 3886919 (title unavailable)
- scientific article; zbMATH DE number 3567782 (title unavailable)
- scientific article; zbMATH DE number 739537 (title unavailable)
- scientific article; zbMATH DE number 1059776 (title unavailable)
- Acceleration of Stochastic Approximation by Averaging
- Almost sure convergence of Titterington's recursive estimator for mixture models
- Convergence rate and averaging of nonlinear two-time-scale stochastic approximation algorithms
- New method of stochastic approximation type
- On the convergence properties of the EM algorithm
- Online EM algorithm for mixture with application to Internet traffic modeling
- Recursive EM and SAGE-inspired algorithms with application to DOA estimation
- Stability of Stochastic Approximation under Verifiable Conditions
- Statistical analysis of finite mixture distributions
- Stochastic approximation and its applications
- Tools for statistical inference. Methods for the exploration of posterior distributions and likelihood functions.
- Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing
Cited in (60)
- Nonparametric estimation of multivariate elliptic densities via finite mixture sieves
- On particle methods for parameter estimation in state-space models
- Simulation of foraging behavior using a decision-making agent with Bayesian and inverse Bayesian inference: temporal correlations and power laws in displacement patterns
- On-line EM Algorithm for the Normalized Gaussian Network
- Recursive online EM estimation of mixture autoregressions
- Emergence of optimal decoding of population codes through STDP
- Model-based clustering of high-dimensional data streams with online mixture of probabilistic PCA
- Inertial stochastic PALM and applications in machine learning
- Estimating multilevel models on data streams
- Online EM algorithm for mixture with application to Internet traffic modeling
- Online identification of time-delay jump Markov autoregressive exogenous systems with recursive expectation-maximization algorithm
- Recursive parameter estimation algorithm of the Dirichlet hidden Markov model
- Divide-and-conquer Bayesian inference in hidden Markov models
- Bag-of-components: an online algorithm for batch learning of mixture models
- Online learning with (multiple) kernels: a review
- Online learning of single- and multivalued functions with an infinite mixture of linear experts
- Online EM with weight-based forgetting
- Bayesian inference and online learning in Poisson neuronal networks
- A Clustered Gaussian Process Model for Computer Experiments
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
- Dynamic Stochastic Blockmodel Regression for Network Data: Application to International Militarized Conflicts
- Online multi-label dependency topic models for text classification
- Online expectation maximization based algorithms for inference in hidden Markov models
- Convergence of a particle-based approximation of the block online expectation maximization algorithm
- Recent developments in expectation-maximization methods for analyzing complex data
- On-line EM variants for multivariate normal mixture model in background learning and moving foreground detection
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Doubly-online changepoint detection for monitoring health status during sports activities
- An online expectation maximization algorithm for exploring general structure in massive networks
- Compressive statistical learning with random feature moments
- Online EM for functional data
- Multivariate online regression analysis with heterogeneous streaming data
- Posterior weighted reinforcement learning with state uncertainty
- Online but accurate inference for latent variable models with local Gibbs sampling
- Estimating random-intercept models on data streams
- Adaptive sequential Monte Carlo by means of mixture of experts
- Properties of the stochastic approximation EM algorithm with mini-batch sampling
- Statistical models for deformable templates in image and shape analysis
- Global implicit function theorems and the online expectation–maximisation algorithm
- Online model selection based on the variational Bayes
- Improving tree probability estimation with stochastic optimization and variance reduction
- Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence
- Robust identification of linear ARX models with recursive EM algorithm based on Student's t-distribution
- Online algorithm for variance components estimation
- The limited-memory recursive variational Gaussian approximation (L-RVGA)
- Mini-batch learning of exponential family finite mixture models
- Online k-MLE for mixture modeling with exponential families
- An Asynchronous Distributed Expectation Maximization Algorithm for Massive Data: The DEM Algorithm
- Identifiability of discrete input–output hidden Markov models with external signals
- Online inference with multi-modal likelihood functions
- Advances in Intelligent Data Analysis VI
- Stochastic multichannel ranking with brain dynamics preferences
- Stream-suitable optimization algorithms for some soft-margin support vector machine variants
- Latent tree models for hierarchical topic detection
- Summary statistics and discrepancy measures for approximate Bayesian computation via surrogate posteriors
- Online Learning with Hidden Markov Models
- On-Line Inference for Hidden Markov Models via Particle Filters
- Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization
- Graph prototypical contrastive learning
- Improvements on scalable stochastic Bayesian inference methods for multivariate Hawkes process
MaRDI item: Q2920258