CTMDP and its relationship with DTMDP (Q914561)

From MaRDI portal

scientific article (English): CTMDP and its relationship with DTMDP

    Statements

    CTMDP and its relationship with DTMDP (English)
    1990
    The discounted continuous-time Markov decision problem (CTMDP for short) discussed here has a countable state space, non-empty action sets, and transition rates that need not be uniformly bounded. Under weak conditions it is proved that the discounted CTMDP is equivalent to a discounted discrete-time Markov decision problem (DTMDP) whose discount factor has the state-dependent form \(\beta(i)\), in the following two senses: i) their optimality equations coincide; ii) their discounted objectives coincide on the set \(\Pi_s\) of stochastic stationary policies. Here \(\sup_i \beta(i)\) may equal 1, but when the transition rates \(q(j\mid i,a)\) of the CTMDP are uniformly bounded, \(\beta(i)\) can be taken to be a constant. Through this equivalence, most results known for the discounted DTMDP can be readily generalized to the discounted CTMDP. In addition, an earlier result on the optimality equation of the discounted CTMDP is corrected.
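The bounded-rate case mentioned in the abstract can be illustrated with the standard uniformization construction, which produces an equivalent DTMDP with a constant discount factor. The sketch below uses a hypothetical 2-state, 2-action CTMDP with made-up rates and rewards (none of the numbers come from the paper), and is a minimal example of the technique rather than the paper's exact \(\beta(i)\) construction for unbounded rates.

```python
import numpy as np

# Hypothetical toy CTMDP (illustrative data, not from the paper).
# q[a][i][j] = transition rate q(j|i,a) for j != i; diagonal entries unused.
alpha = 0.1                              # continuous-time discount rate
q = np.array([[[0.0, 2.0],
               [1.0, 0.0]],
              [[0.0, 0.5],
               [3.0, 0.0]]])
r = np.array([[1.0, 0.0],                # r[a][i]: reward rate in state i, action a
              [2.0, 0.5]])

# Uniformization: pick Lambda >= sup q(i,a). Since the rates here are
# uniformly bounded, the DTMDP discount factor beta is a constant,
# matching the bounded case described in the abstract.
out_rate = q.sum(axis=2)                 # q(i,a) = sum_{j != i} q(j|i,a)
Lam = out_rate.max()
beta = Lam / (Lam + alpha)               # constant discount factor of the DTMDP

# Equivalent DTMDP: p(j|i,a) = q(j|i,a)/Lambda for j != i,
# and a self-loop p(i|i,a) = 1 - q(i,a)/Lambda absorbs the remaining mass.
p = q / Lam
for a in range(2):
    for i in range(2):
        p[a, i, i] = 1.0 - out_rate[a, i] / Lam
r_dt = r / (Lam + alpha)                 # one-step reward of the DTMDP

# Value iteration on the equivalent DTMDP.
V = np.zeros(2)
for _ in range(2000):
    V = np.max(r_dt + beta * (p @ V), axis=0)

# At the fixed point, V also satisfies the CTMDP optimality equation
# (alpha + q(i,a)) V(i) = max_a [ r(i,a) + sum_{j != i} q(j|i,a) V(j) ].
bellman = np.max((r + q @ V) / (alpha + out_rate), axis=0)
print(V, bellman)
```

The two printed vectors agree: the fixed point of the uniformized DTMDP is exactly the solution of the CTMDP optimality equation, which is the equivalence (in the bounded case) that the abstract describes.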
    discounted continuous time Markov decision
    countable state space
    discount factor
    optimality equation
