Analyticity, Convergence, and Convergence Rate of Recursive Maximum-Likelihood Estimation in Hidden Markov Models
From MaRDI portal
Publication: 5281191
DOI: 10.1109/TIT.2010.2081110
zbMath: 1366.62166
arXiv: 0904.4264
OpenAlex: W1974006687
MaRDI QID: Q5281191
Publication date: 27 July 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://arxiv.org/abs/0904.4264
Mathematics Subject Classification:
- Asymptotic distribution theory in statistics (62E20)
- Markov processes: estimation; hidden Markov models (62M05)
- Detection theory in information and communication theory (94A13)
Related Items
- Stability of optimal filter higher-order derivatives
- Backward Importance Sampling for Online Estimation of State Space Models
- Two-timescale stochastic gradient descent in continuous time with applications to joint online parameter estimation and optimal sensor placement
- Asymptotic bias of stochastic gradient search
- Online expectation maximization based algorithms for inference in hidden Markov models
- Approximate, Computationally Efficient Online Learning in Bayesian Spiking Neurons
- Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema
- Gradient free parameter estimation for hidden Markov models with intractable likelihoods
- Particle-based online estimation of tangent filters with application to parameter estimation in nonlinear state-space models
- Recursive estimation of multivariate hidden Markov model parameters
- Joint Online Parameter Estimation and Optimal Sensor Placement for the Partially Observed Stochastic Advection-Diffusion Equation
- Robustness to incorrect models and data-driven learning in average-cost optimal stochastic control