Compactification methods in a control problem of jump processes under partial observations (Q1277214)

From MaRDI portal
Reviewed by: Alexander Yu. Veretennikov

Language: English
Label: Compactification methods in a control problem of jump processes under partial observations
Description: scientific article

    Statements

    Compactification methods in a control problem of jump processes under partial observations (English)
    8 August 1999
    A control problem for a partially observed Markov jump process is considered. The state process \(X_t\) takes values in a finite set \(I\) and has transition intensities \(\lambda(t,x,x',u)\), where \(0\leq t\leq T\), \(x,x'\in I\), and \(u\in A\) with \(A\) a compact metric space. The observation process \(Y_t\) satisfies the equation \[ dY_t = h(t,X_t)\,dt + dW_t, \quad 0\leq t\leq T, \quad Y_0=0, \] where \(h\) is a known function and \(W\) is a Wiener process. The problem is to minimize the cost functional \[ J(s,p,u) = E\left[ \int_s^T c(t,X_t,u_t)\,dt + g(X_T) \right] \] over controls \((u_t)_{s\le t\le T}\) with values in \(A\) such that \(u_t\) depends only on the observations up to time \(t\); here \(0\leq s\leq T\) and \(p\) is the law of \(X_s\). The main results are: the existence of an optimal randomized (relaxed) control; the equality of the infima of the cost functional over randomized and non-randomized controls; and a dynamic programming principle. A separated control problem is also considered, and the existence of a Markovian optimal filter is established.
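    To fix ideas, here is a minimal simulation sketch in Python, not taken from the paper: it discretizes a toy two-state instance of the model, simulates the controlled jump process \(X\) together with the observation \(Y_t = \int_0^t h(r,X_r)\,dr + W_t\), and estimates the cost \(J(0,\delta_0,u)\) by Monte Carlo under one particular (non-optimal) observation-feedback control. The state space, the intensities \(\lambda\), the functions \(h\), \(c\), \(g\), and the control rule are all assumed here purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

T, dt = 1.0, 0.005            # horizon and Euler time step (assumed toy values)
n_steps = int(T / dt)

def lam(t, x, x_next, u):
    """Transition intensity lambda(t, x, x', u); toy choice, not from the paper."""
    return 0.0 if x == x_next else 1.0 + u   # controls u in A = [0, 1] speed up switching

def h(t, x):
    """Observation drift h(t, x): maps states {0, 1} to {-1, +1}."""
    return 2.0 * x - 1.0

def c(t, x, u):
    """Running cost c(t, x, u)."""
    return x + 0.5 * u ** 2

def g(x):
    """Terminal cost g(x)."""
    return float(x == 1)

def control(t, y):
    """A (non-optimal) observation-feedback control u_t = u(t, Y_t) with values in A."""
    return 1.0 if y > 0.0 else 0.0

def one_path_cost(x0=0):
    """Simulate one trajectory of (X, Y) on [0, T] and return its realized cost."""
    x, y, cost = x0, 0.0, 0.0
    for k in range(n_steps):
        t = k * dt
        u = control(t, y)
        # X jumps to the other state with probability ~ lambda * dt
        if rng.random() < lam(t, x, 1 - x, u) * dt:
            x = 1 - x
        # observation increment: dY = h(t, X_t) dt + dW_t
        y += h(t, x) * dt + np.sqrt(dt) * rng.standard_normal()
        cost += c(t, x, u) * dt
    return cost + g(x)

# Monte Carlo estimate of J(0, p, u) with p = delta_0, i.e. X_0 = 0
print("estimated cost:", np.mean([one_path_cost(0) for _ in range(1000)]))

    Replacing `control` by a randomized (relaxed) rule, i.e. sampling \(u_t\) from a probability measure on \(A\) that depends on the observations, gives the wider class of controls over which the paper establishes existence of an optimum and equality of the infima with the non-randomized case.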
    Markov jump process
    filtering
    control
    dynamic programming
    Markovian filter
    partially observed process
    relaxed control
    randomized controls
