The adjusted Viterbi training for hidden Markov models (Q1002581)

From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile
Property / arXiv ID: 0803.2394
Property / cites work:
- Statistical Inference for Probabilistic Functions of Finite State Markov Chains
- Markov Chains
- Asymptotic behaviour of classification maximum likelihood estimates
- Properties of the maximum a posteriori path estimator in hidden Markov models
- Convergence of the maximum a posteriori path estimator in hidden Markov models
- Inference in hidden Markov models
- A classification EM algorithm for clustering and two stochastic versions
- Biological Sequence Analysis
- Hidden Markov processes
- Model-Based Clustering, Discriminant Analysis, and Density Estimation
- Stochastic volatility models as hidden Markov models and statistical applications
- Hidden Markov Models and Disease Mapping
- The segmental K-means algorithm for estimating parameters of hidden Markov models
- Q4369222
- On adjusted Viterbi training
- Adjusted Viterbi training
- Infinite Viterbi alignments in the two state hidden Markov models
- Maximum-likelihood estimation for hidden Markov models
- Multiresolution image classification by hierarchical modeling with two-dimensional hidden Markov models
- Q4160281
- Q4523870
- Hidden Markov Models for Speech Recognition
- Consistent and asymptotically normal parameter estimates for hidden Markov models
- Global convergence and empirical consistency of the generalized Lloyd algorithm
- Application of the Conditional Population-Mixture Model to Image Segmentation

Language: English
Label: The adjusted Viterbi training for hidden Markov models
Description: scientific article

    Statements

    The adjusted Viterbi training for hidden Markov models (English)
    2 March 2009
    The problem of effective parameter estimation for a hidden Markov model \((X_n,Y_n)\), \(n=1,2,\dots\), is considered. Here \(Y_n\), \(n=1,2,\dots\), is an unobservable time-homogeneous Markov chain with state space \(S=\{1,\dots,K\}\), transition matrix \(\mathbb{P}=(p_{ij})_{i,j \in S}\) and stationary initial distribution \(\pi\) (i.e. \(\pi=\pi \mathbb{P}\)). When \(Y_n\) is in state \(l\in S\), an observation \(x_n\) of \(X_n \in \mathbb{R}^D\) is emitted, independently of everything else, with density \(f(x\mid\theta_l)\), which is known up to the parameter \(\theta_l \in \Theta \subseteq \mathbb{R}^d\). Given observations \(x_1,x_2,\dots,x_n\), the problem is to estimate the matrix \(\mathbb{P}\) and the parameters \(\theta_1,\dots,\theta_K\). The principal tool is the EM procedure. Applications often use the less computationally intensive Viterbi training procedure, but it is biased and does not satisfy the fixed-point property (initialized at the true parameters, it can move away from them). In the article, adjusted Viterbi training, a new method that restores the fixed-point property, is proposed. The advantages of the improved method are investigated analytically and by simulations.
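    For orientation, the following is a minimal sketch of plain (unadjusted) Viterbi training, sometimes called segmental K-means, for a univariate Gaussian HMM. All names, the synthetic data, the smoothing, and the stopping rule are illustrative assumptions, and the bias-correcting adjustment proposed in the article is not implemented here.

# A minimal, illustrative sketch of plain (unadjusted) Viterbi training for a
# K-state HMM with univariate Gaussian emissions; it is NOT the adjusted
# procedure proposed in the article.
import numpy as np
from scipy.stats import norm


def viterbi(x, P, pi, means, sds):
    """Most likely state path for observations x under the current parameters."""
    n, K = len(x), len(pi)
    log_b = norm.logpdf(x[:, None], loc=means, scale=sds)  # (n, K) emission log-densities
    log_P = np.log(P)
    delta = np.log(pi) + log_b[0]
    psi = np.zeros((n, K), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_P                    # scores[i, j]: best path ending in i, then i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_b[t]
    path = np.empty(n, dtype=int)
    path[-1] = delta.argmax()
    for t in range(n - 2, -1, -1):                         # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path


def viterbi_training(x, K, n_iter=50, seed=0):
    """Alternate between Viterbi decoding and re-estimating parameters from the decoded path."""
    rng = np.random.default_rng(seed)
    P = np.full((K, K), 1.0 / K)
    pi = np.full(K, 1.0 / K)                               # kept fixed (uniform) in this sketch
    means = np.sort(rng.choice(x, size=K, replace=False))
    sds = np.full(K, x.std())
    for _ in range(n_iter):
        path = viterbi(x, P, pi, means, sds)
        counts = np.ones((K, K))                           # +1 smoothing avoids empty rows
        for a, b in zip(path[:-1], path[1:]):
            counts[a, b] += 1
        P = counts / counts.sum(axis=1, keepdims=True)
        for k in range(K):                                 # emission parameters from assigned observations
            xk = x[path == k]
            if len(xk) > 1:
                means[k], sds[k] = xk.mean(), max(xk.std(), 1e-3)
    return P, means, sds


# Illustrative use on synthetic data from a two-state Gaussian HMM.
rng = np.random.default_rng(1)
true_P = np.array([[0.9, 0.1], [0.2, 0.8]])
states = [0]
for _ in range(1999):
    states.append(rng.choice(2, p=true_P[states[-1]]))
x = rng.normal(loc=np.array([0.0, 3.0])[states], scale=1.0)
P_hat, means_hat, sds_hat = viterbi_training(x, K=2)
print(P_hat, means_hat, sds_hat)

    The sketch shows why the procedure is computationally cheap (each iteration needs only one Viterbi decoding plus closed-form re-estimation) and also where the bias enters: parameters are re-estimated from the single decoded path rather than from the full conditional distribution of the hidden states, which is, broadly, what the article's adjustment is designed to compensate for.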
    Baum-Welch
    bias
    computational efficiency
    consistency
    EM
    hidden Markov models
    maximum likelihood
    parameter estimation
    Viterbi extraction
    Viterbi training