Average optimality for risk-sensitive control with general state space (Q2455059)

From MaRDI portal
Language: English
Label: Average optimality for risk-sensitive control with general state space
Description: scientific article

    Statements

    Average optimality for risk-sensitive control with general state space (English)
    publication date: 22 October 2007
    A discrete-time Markov control process on a general state space is considered. The aim of the paper is to establish the optimality inequality for risk-sensitive dynamic programming and to derive an optimal stationary policy. A similar result was obtained by Hernández-Hernández and Marcus under the assumption that there exists a stationary policy inducing a finite average cost equal to the same constant in every state. Here, instead of this assumption, the author assumes that a certain family of functions is bounded, which forces the process to reach ``good states'' sufficiently fast. For related work see [\textit{D. Hernández-Hernández} and \textit{S. I. Marcus}, Appl. Math. Optim. 40, 273--285 (1999; Zbl 0937.90115)].
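    For orientation, in the notation commonly used in this line of work (e.g. by Hernández-Hernández and Marcus; the symbols below follow that standard usage and need not match the paper's own notation), the risk-sensitive average cost of a policy \(\pi\) with risk factor \(\gamma>0\) is
\[
J_\gamma(x,\pi)=\limsup_{n\to\infty}\frac{1}{\gamma n}\log \mathbb{E}_x^{\pi}\exp\Big(\gamma\sum_{t=0}^{n-1}c(x_t,a_t)\Big),
\]
    and the average cost optimality inequality referred to above takes the form
\[
\gamma\lambda+h(x)\ \ge\ \inf_{a\in A(x)}\Big\{\gamma c(x,a)+\log\int_X e^{h(y)}\,Q(dy\mid x,a)\Big\},\qquad x\in X,
\]
    where \(c\) is the one-stage cost, \(Q\) the transition kernel, \(A(x)\) the set of admissible actions at \(x\), \(\lambda\) a constant (the optimal average cost) and \(h\) a measurable function; a stationary policy attaining the infimum on the right-hand side is then average optimal.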
    risk-sensitive control
    Borel state space
    average cost optimality inequality
