Abstract: We study modifications of the Viterbi Training (VT) algorithm for estimating emission parameters in Hidden Markov Models (HMM) in general, and in mixture models in particular. Motivated by applications of VT to HMMs used in speech recognition, natural language modeling, image analysis, and bioinformatics, we investigate the possibility of alleviating the inconsistency of VT while controlling the amount of extra computation. Specifically, we propose to enable VT to asymptotically recover the true values of the parameters, as the EM algorithm does. This relies on the infinite Viterbi alignment and the associated limiting probability distribution. This paper, however, focuses on mixture models, an important special case of HMMs, in which the limiting distribution can always be computed exactly; finding such a limiting distribution for general HMMs is a more challenging task and is under our ongoing investigation. A simulation of a univariate Gaussian mixture shows that our central algorithm (VA1) can dramatically improve accuracy at little cost in computation time. We also present VA2, a more mathematically advanced correction to VT, and verify by simulation its fast convergence and high accuracy; its computational feasibility remains to be investigated in future work.
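To make the baseline concrete, the following is a minimal sketch of plain Viterbi Training (also known as classification EM or segmental k-means) for a univariate two-component Gaussian mixture, the setting of the paper's simulation. It is not the paper's VA1 or VA2 adjustment; the function name and interface are illustrative only. For a mixture, the "Viterbi alignment" reduces to a hard maximum-posterior assignment of each observation to a component.

```python
import numpy as np

def viterbi_training_gmm(x, n_iter=50, seed=0):
    """Plain Viterbi Training (hard/classification EM) for a univariate
    two-component Gaussian mixture. Each iteration assigns every point to
    the component with the highest posterior probability (the Viterbi
    alignment for a mixture), then re-estimates weights, means, and
    variances from that hard assignment. Illustrative sketch; not the
    paper's adjusted VA1/VA2 algorithms."""
    rng = np.random.default_rng(seed)
    # Initialize the two component means from random data points.
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.array([np.var(x)] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # Log-posterior (up to an additive constant) of each component
        # for each point; shape (2, len(x)).
        logp = (np.log(w)[:, None]
                - 0.5 * np.log(2 * np.pi * var)[:, None]
                - (x[None, :] - mu[:, None]) ** 2 / (2 * var[:, None]))
        z = np.argmax(logp, axis=0)  # hard (Viterbi) assignment
        for k in range(2):
            pts = x[z == k]
            if len(pts) == 0:        # guard against an emptied cluster
                continue
            w[k] = len(pts) / len(x)
            mu[k] = pts.mean()
            var[k] = max(pts.var(), 1e-6)
        w /= w.sum()
    return w, mu, var
```

The hard assignment is what makes VT fast but also inconsistent: each cluster mean is estimated from a truncated sample, so the fitted means are biased away from the decision boundary even as the sample size grows. The paper's adjustment corrects the re-estimation step using the limiting distribution of these truncated samples.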
Recommendations
Cites work
- scientific article (untitled); zbMATH DE number 3594513
- scientific article (untitled); zbMATH DE number 1222687
- scientific article (untitled); zbMATH DE number 1099037
- scientific article (untitled); zbMATH DE number 765034
- A Lagrangian formulation of Zador's entropy-constrained quantization theorem
- A classification EM algorithm for clustering and two stochastic versions
- A computationally efficient approach to the estimation of two- and three-dimensional hidden Markov models
- Adjusted Viterbi training
- Application of the Conditional Population-Mixture Model to Image Segmentation
- Asymptotic Statistics
- Asymptotic behaviour of classification maximum likelihood estimates
- Biological Sequence Analysis
- Consistent and asymptotically normal parameter estimates for hidden Markov models
- Convergence of the maximum a posteriori path estimator in hidden Markov models
- Finite mixture models
- Global convergence and empirical consistency of the generalized Lloyd algorithm
- Hidden Markov Models for Speech Recognition
- Hidden Markov processes
- Inference in hidden Markov models
- Maximum-likelihood estimation for hidden Markov models
- Model-Based Clustering, Discriminant Analysis, and Density Estimation
- Multiresolution image classification by hierarchical modeling with two-dimensional hidden Markov models
- Properties of the maximum a posteriori path estimator in hidden Markov models
- Statistical Inference for Probabilistic Functions of Finite State Markov Chains
- Stochastic volatility models as hidden Markov models and statistical applications
- The segmental K-means algorithm for estimating parameters of hidden Markov models
Cited in (9)
- Adjusted Viterbi training
- Existence of infinite Viterbi path for pairwise Markov models
- Estimation of Viterbi path in Bayesian hidden Markov models
- The adjusted Viterbi training for hidden Markov models
- Asymptotic risks of Viterbi segmentation
- Handling ties correctly and efficiently in Viterbi training using the Viterbi semiring
- Viterbi training in PRISM
- Regenerativity of Viterbi process for pairwise Markov models
- MAP segmentation in Bayesian hidden Markov models: a case study
This page was built for publication: On adjusted Viterbi training
MaRDI item Q996733