CTMDP and its relationship with DTMDP (Q914561): Difference between revisions
From MaRDI portal
Revision as of 17:08, 30 January 2024
scientific article

| Language | Label | Description | Also known as |
|---|---|---|---|
| English | CTMDP and its relationship with DTMDP | scientific article | |
Statements
CTMDP and its relationship with DTMDP (English)
1990
The discounted continuous time Markov decision problem (CTMDP for short) discussed here has a countable state space, non-empty action sets and non-uniformly bounded transition rates. Under weak conditions it is proved that the discounted CTMDP is equivalent to a discounted discrete time Markov decision problem (DTMDP) with a discount factor of the form \(\beta(i)\), in the following sense: i) their optimality equations coincide; ii) their discounted objectives coincide on the set \(\Pi_s\) of stochastic stationary policies. Here \(\sup_i \beta(i)\) may equal 1. However, when the transition rates \(q(j\mid i,a)\) of the CTMDP are uniformly bounded, \(\beta(i)\) can be taken to be a constant. Through this equivalence, most results known for discounted DTMDP can easily be generalized to discounted CTMDP. In addition, a previous result on the proof of the optimality equation of discounted CTMDP is corrected.
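The uniformly bounded special case mentioned above can be illustrated by the classical uniformization construction. The sketch below is not the paper's construction (which handles non-uniformly bounded rates and a state-dependent \(\beta(i)\)); it assumes a single fixed action, a discount rate `alpha` and a uniformization constant `Lam` bounding all transition rates, all of which are illustrative names not taken from the source:

```python
import numpy as np

def uniformize(Q, alpha, Lam):
    """Sketch of classical uniformization for one fixed action.

    Q     : generator matrix (rows sum to 0; off-diagonal entry Q[i, j]
            is the transition rate q(j | i, a)).
    alpha : continuous-time discount rate, alpha > 0.
    Lam   : uniformization constant with Lam >= max_i |Q[i, i]|.

    Returns the DTMDP transition matrix P and the constant discount
    factor beta = Lam / (alpha + Lam) < 1.
    """
    n = Q.shape[0]
    P = np.eye(n) + Q / Lam          # P(j|i,a) = delta_ij + q(j|i,a)/Lam
    beta = Lam / (alpha + Lam)       # constant discount factor
    return P, beta

# Example: 2-state generator, discount rate alpha = 0.1
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
P, beta = uniformize(Q, alpha=0.1, Lam=2.0)
print(P)       # each row is a probability distribution
print(beta)    # 2.0 / 2.1
```

Because `Lam` dominates every exit rate, each row of `P` is a genuine probability distribution, which is exactly why the continuous-time problem can be restated as a discrete-time one with constant discount factor.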
discounted continuous time Markov decision
countable state space
discount factor
optimality equation