Necessary and sufficient conditions for a bounded solution to the optimality equation in average reward Markov decision chains (Q1103532)


    Statements

    Necessary and sufficient conditions for a bounded solution to the optimality equation in average reward Markov decision chains (English)
    1988
    Consider a discrete time Markov decision process with countable state space S. In addition to the standard assumptions of compact action sets and continuous transition probabilities, suppose that the Markov chain determined by each stationary policy f has a single positive recurrent class R(f), which is entered with probability one and which contains at least one member of a fixed, finite subset G of S. The main theorem gives, under these assumptions, five necessary and sufficient conditions (including a simultaneous Doeblin condition with set G) for the average reward optimality equation to have a bounded measurable solution for an arbitrary bounded measurable reward function. The establishment of necessity is an uncommon feature; sufficient conditions are discussed in \textit{L. C. Thomas} [``Connectedness conditions for denumerable state Markov decision processes'', in: Recent developments in Markov decision processes, R. Hartley, L. C. Thomas, D. J. White (eds.), Academic Press (1980; Zbl 0547.90064)].
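    For orientation, the average reward optimality equation referred to in the review can be written, in standard notation that is assumed here rather than quoted from the paper (action sets \(A(s)\), transition probabilities \(p(s'\mid s,a)\), one-stage rewards \(r(s,a)\)):
    \[
      g + h(s) \;=\; \max_{a \in A(s)} \Big[\, r(s,a) + \sum_{s' \in S} p(s' \mid s,a)\, h(s') \,\Big], \qquad s \in S,
    \]
    where the constant \(g\) is the optimal average reward and \(h\) is a bounded measurable relative value (bias) function; the theorem characterizes exactly when such a bounded pair \((g,h)\) exists for every bounded measurable reward function \(r\).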
    optimal stationary policies
    discrete time Markov decision process
    countable state space
    simultaneous Doeblin condition
    average reward optimality equation
    bounded measurable reward function