Dynamic programming for ergodic control with partial observations. (Q2574544)

From MaRDI portal
Cites work

    Bounds for the fundamental solution of a parabolic equation
    A New Approach to the Limit Theory of Recurrent Markov Chains
    Occupation measures for controlled Markov processes: Characterization and optimality
    Q5560061
    A remark on the attainable distributions of controlled diffusions
    Q3995082
    White-Noise Representations in Stochastic Realization Theory
    Q4858374
    The value function in ergodic control of diffusion processes with partial observations
    Average Cost Dynamic Programming Equations For Controlled Markov Chains With Partial Observations
    The value function in ergodic control of diffusion processes with partial observations II
    Dynamic Programming Conditions for Partially Observable Stochastic Systems
    Optimal Control for Partially Observed Diffusions
    Mimicking the one-dimensional marginal distributions of processes having an Ito differential
    Q4255598
    Q3959169
    Q5562267
    Markov chains and stochastic stability
    A splitting technique for Harris recurrent Markov chains
    Necessary and Sufficient Dynamic Programming Conditions for Continuous Time Stochastic Optimal Control
    Martingale conditions for the optimal control of continuous time stochastic systems
    Survey of Measurable Selection Theorems


scientific article (English)
    Statements

    Dynamic programming for ergodic control with partial observations. (English)
    29 November 2005
    The paper derives a dynamic programming principle for the optimal control of a partially observed Markov process taking values in a Euclidean space. The functional to be minimized is the long-run average (ergodic) cost over an infinite horizon; the control space is compact. The problem is addressed by approximating the ergodic cost functional by a family of discounted cost functionals whose discount factors converge to unity (the vanishing-discount method). The dynamic programming inequalities are first derived in discrete time, and the result is then carried over to partially observed Markov semimartingales in continuous time. The construction of optimal controls proceeds in the following steps:

    1. restating the problem by means of a separation principle, which makes the control process adapted to the observation process;
    2. changing the probability measure in order to eliminate variability in the marginal distribution of the observation process;
    3. imposing a stability assumption on the state process in Lyapunov-function form;
    4. embedding the state process into another one with a ``doubled'' range of values, for which an accessible atom exists.

    The argument draws on earlier results of the same author concerning the optimal ergodic control of partially observed finite Markov chains.
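    The vanishing-discount idea underlying the paper can be illustrated on a toy, fully observed model (a hypothetical 2-state, 2-action Markov chain, not taken from the paper): as the discount factor beta tends to 1, the normalized discounted value (1 - beta) * V_beta(x) approaches the optimal long-run average cost, which is the limit exploited when passing from discounted to ergodic dynamic programming. This is only a sketch of the general principle; all transition probabilities and costs below are invented for illustration.

```python
def discounted_value(P, c, beta, tol=1e-10):
    """Value iteration for the beta-discounted cost MDP.

    P[s][a][t] : probability of moving from state s to state t under action a.
    c[s][a]    : one-step cost of action a in state s.
    Returns the optimal discounted value function as a list indexed by state.
    """
    n = len(c)
    V = [0.0] * n
    while True:
        # Bellman update: minimize one-step cost plus discounted expected value.
        Vn = [min(c[s][a] + beta * sum(P[s][a][t] * V[t] for t in range(n))
                  for a in range(len(c[s])))
              for s in range(n)]
        if max(abs(Vn[s] - V[s]) for s in range(n)) < tol:
            return Vn
        V = Vn

# Hypothetical model: in state 0, action 0 costs 1 and moves to state 1
# w.p. 0.5; in state 1, action 0 is free and returns to state 0 w.p. 0.8.
P = [
    [[0.5, 0.5], [0.1, 0.9]],   # state 0: actions 0 and 1
    [[0.8, 0.2], [0.0, 1.0]],   # state 1: actions 0 and 1
]
c = [[1.0, 2.0], [0.0, 3.0]]

# For this chain the optimal stationary policy (action 0 in both states) has
# invariant law (8/13, 5/13) and average cost 8/13, since only state 0 costs 1.
for beta in (0.9, 0.99, 0.999):
    V = discounted_value(P, c, beta)
    print(beta, (1 - beta) * V[0])   # tends to 8/13 as beta -> 1
```

    The paper's setting is far harder (continuous state space, partial observations, continuous time), but the same normalization argument, combined with the separation and splitting steps above, drives the passage to the ergodic dynamic programming inequalities.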
    Markov process
    ergodic cost
