Linear optimal prediction and innovations representations of hidden Markov models. (Q2574606)

    Statements

    Publication date: 29 November 2005
    Let \(\{x_t\}\) be an unobservable Markov chain with states \(e_1, \dots , e_n\) and let \(\{y_t\}\) be an observable process taking values \(d_1, \dots , d_{\ell }\). Assume that, given \(\{x_t\}\), the \(y_t\) are conditionally independent and that the conditional distribution of \(y_t\) depends on \(x_t\) only. Then \(\{(x_t, y_t)\}\) is a hidden Markov model. Define \(a_{ij}=P(x_{t+1}=e_i\mid x_t=e_j)\), \(c_{ij}=P(y_{t+1}=d_i\mid x_t=e_j)\), \(A=(a_{ij})\) and \(C=(c_{ij})\). The authors transform the state space system \(x_{t+1}=A x_t + \xi _{t+1}\), \(y_{t+1}=C x_t + \eta _{t+1}\) into an innovations representation, that is, a recursive representation of the optimal linear predictor. The derivation requires the solution of algebraic Riccati equations in a non-minimal setting. Two numerical examples with \(n=\ell =2\) are presented. They show that the optimal predictor and the optimal linear predictor perform similarly, while the optimal linear predictor has a smaller computational cost.
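    The following is a minimal illustrative sketch (Python/NumPy) of the kind of linear predictor described above, not the authors' exact construction: it writes a small HMM in the state space form \(x_{t+1}=A x_t + \xi _{t+1}\), \(y_{t+1}=C x_t + \eta _{t+1}\) with indicator-vector states and outputs, obtains a steady-state gain by iterating a filtering Riccati recursion, and runs the resulting innovations-form predictor. The 2-state, 2-output parameter values, the use of stationary noise covariances, and the neglect of the cross-covariance between \(\xi \) and \(\eta \) are assumptions made purely for this illustration; the paper treats the exact problem, including the non-minimality issues.

```python
# Illustrative sketch only -- not the authors' algorithm.  Parameter values,
# stationary noise covariances, and the zero cross-covariance between state
# and output noise are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-output chain (n = l = 2), column-stochastic:
#   a_ij = P(x_{t+1} = e_i | x_t = e_j),   c_ij = P(y_{t+1} = d_i | x_t = e_j)
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
C = np.array([[0.7, 0.3],
              [0.3, 0.7]])
n, l = A.shape[0], C.shape[0]

# Stationary distribution pi of the chain: A @ pi = pi.
w, v = np.linalg.eig(A)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

# Stationary covariances of xi_{t+1} = x_{t+1} - A x_t and eta_{t+1} = y_{t+1} - C x_t,
# obtained by averaging the conditional covariances over pi; the cross-covariance
# between xi and eta is neglected here for simplicity.
Q = sum(p * (np.diag(A[:, j]) - np.outer(A[:, j], A[:, j])) for j, p in enumerate(pi))
R = sum(p * (np.diag(C[:, j]) - np.outer(C[:, j], C[:, j])) for j, p in enumerate(pi))

# Steady-state solution of the filtering Riccati recursion by fixed-point iteration.
# The innovation covariance M is singular (the components of y sum to one), so a
# pseudoinverse is used instead of an ordinary inverse.
P = np.eye(n)
for _ in range(1000):
    M = C @ P @ C.T + R
    P_new = A @ P @ A.T + Q - A @ P @ C.T @ np.linalg.pinv(M) @ C @ P @ A.T
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new
K = A @ P @ C.T @ np.linalg.pinv(C @ P @ C.T + R)   # steady-state predictor gain

# Simulate the HMM and run the innovations-form (optimal linear) predictor:
#   yhat_{t+1} = C xhat_t,   xhat_{t+1} = A xhat_t + K (y_{t+1} - C xhat_t).
T = 5000
x = rng.choice(n, p=pi)          # state index of x_0
xhat = pi.copy()                 # linear predictor of the state indicator vector
sq_err = 0.0
for t in range(T):
    y = rng.choice(l, p=C[:, x])         # y_{t+1} drawn given x_t
    x = rng.choice(n, p=A[:, x])         # x_{t+1} drawn given x_t
    y_vec = np.eye(l)[y]                 # indicator vector of the observation
    innovation = y_vec - C @ xhat
    sq_err += float(innovation @ innovation)
    xhat = A @ xhat + K @ innovation

print("mean squared one-step prediction error:", sq_err / T)
```

    The Riccati recursion is iterated directly rather than handed to a dedicated DARE solver, since the indicator-vector realization is non-minimal and the innovation covariance is singular.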
    Keywords:
    hidden Markov model
    innovations representation
    Kalman filter
    non-minimality
    prediction error representation
    Riccati equation

    Identifiers