Notes on average Markov decision processes with a minimum-variance criterion (Q1612012)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Notes on average Markov decision processes with a minimum-variance criterion | scientific article |
Statements
Notes on average Markov decision processes with a minimum-variance criterion (English)
28 August 2002
In Markov decision processes (here with countable state and action spaces), a standard objective is to maximize the expected average reward per unit of time. For a risk-averse decision-maker, however, a policy that is optimal under this criterion may have an unacceptably high variance, which has made variance minimization an increasingly active research topic. The author carefully analyses two relevant papers by \textit{M. Kurano} [J. Math. Anal. Appl. 123, 572--583 (1987; Zbl 0619.90080)] and \textit{X. Guo} [Math. Meth. Oper. Res. 49, 87--96 (1999; Zbl 1016.90071)] and detects mistakes in the proofs of their main theorems, so that those results must be regarded as not yet proved. Using a slightly modified variance criterion and a mild additional condition, the author proves the existence of a Markov policy that is \(\varepsilon\)-strong variance optimal for every \(\varepsilon>0\).
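As a toy illustration of why a variance criterion is needed on top of the average-reward criterion (this sketch is invented for this note and is not the author's construction or the criterion of the papers under review), the following Python snippet compares two stationary policies with the same expected average reward but different reward variance; a minimum-variance criterion prefers the deterministic one:

```python
import random

# Two stationary policies with equal expected average reward (= 1)
# but different per-step reward variance.  Single-state toy example,
# not the countable-state model of the paper.
def policy_a(rng):
    return 1.0                       # deterministic reward: variance 0

def policy_b(rng):
    return rng.choice([0.0, 2.0])    # mean 1, variance 1

def long_run_stats(policy, n_steps, seed=0):
    """Estimate the time-average reward and the average squared
    deviation from it along one simulated trajectory."""
    rng = random.Random(seed)
    rewards = [policy(rng) for _ in range(n_steps)]
    mean = sum(rewards) / n_steps
    var = sum((r - mean) ** 2 for r in rewards) / n_steps
    return mean, var

stats = {name: long_run_stats(p, 100_000)
         for name, p in [("A", policy_a), ("B", policy_b)]}
for name, (mean, var) in stats.items():
    print(f"policy {name}: average reward ~ {mean:.3f}, variance ~ {var:.3f}")
```

Both policies are average-optimal in this toy example, yet only policy A minimizes the (time-average) variance, which is the kind of distinction the variance criterion is meant to capture.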
Markov decision process
average criterion
variance minimization
\(\varepsilon\)-strong variance optimal policy