Optimal control of continuous-time Markov chains with noise-free observation
From MaRDI portal
Publication:4563379
Abstract: We consider an infinite horizon optimal control problem for a continuous-time Markov chain taking values in a finite set, with noise-free partial observation. The observation process is defined as \(Y_t = h(X_t)\), \(t \ge 0\), where \(h\) is a given map defined on the state space. The observation is noise-free in the sense that the only source of randomness is the process \(X\) itself. The aim is to minimize a discounted cost functional and to study the associated value function \(V\). After transforming the control problem with partial observation into one with complete observation (the separated problem) by means of filtering equations, we provide a link between the value function \(v\) associated with the latter control problem and the original value function \(V\). Then we present two different characterizations of \(v\) (and indirectly of \(V\)): on the one hand, as the unique fixed point of a suitably defined contraction mapping; on the other hand, as the unique constrained viscosity solution (in the sense of Soner) of an HJB integro-differential equation. Under suitable assumptions, we finally prove the existence of an optimal control.
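The setup sketched in the abstract can be summarized in formulas. This is an illustrative sketch only; the specific symbols (\(I\), \(h\), \(\beta\), \(c\), \(\mathcal{T}\)) are plausible assumptions consistent with the abstract, not notation taken from the paper itself.

```latex
% Noise-free observation of a finite-state Markov chain X with values in I:
%   Y_t = h(X_t),  t >= 0,
% where h : I -> O is a given deterministic map (the only randomness is X).
Y_t = h(X_t), \qquad t \ge 0 .

% Infinite-horizon discounted cost to be minimized over admissible controls u
% (c is a running cost, beta > 0 a discount rate; both assumed for illustration):
J(u) = \mathbb{E}\left[ \int_0^\infty e^{-\beta t}\, c(X_t, u_t)\, dt \right],
\qquad \beta > 0 .

% The two characterizations announced in the abstract, schematically:
% (i) fixed-point form: v is the unique solution of
v = \mathcal{T} v
% for a suitably defined contraction mapping T on functions of the filter;
% (ii) v is the unique constrained viscosity solution (in the sense of Soner)
% of the associated HJB integro-differential equation.
```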
Recommendations
- Stochastic filtering and optimal control of pure jump Markov processes with noise-free partial observation
- A partially observed control problem for Markov chains
- Optimal control of partially observable piecewise deterministic Markov processes
- Ergodic control of partially observed Markov chains
- The mean squared loss control problem for a partially observed Markov chain
Cites work
- scientific article; zbMATH DE number 425394
- scientific article; zbMATH DE number 3961484
- scientific article; zbMATH DE number 3678487
- scientific article; zbMATH DE number 17494
- scientific article; zbMATH DE number 1254171
- scientific article; zbMATH DE number 722978
- scientific article; zbMATH DE number 3798532
- A dynamic programming algorithm for the optimal control of piecewise deterministic Markov processes
- An introduction to stochastic filtering theory.
- Applied Probability and Queues
- Constrained BSDEs driven by a non-quasi-left-continuous random measure and optimal control of PDMPs on bounded domains
- Constrained and unconstrained optimal discounted control of piecewise deterministic Markov processes
- Continuous average control of piecewise deterministic Markov processes
- Controlled Jump Markov Models
- Controlled Markov processes and viscosity solutions
- Filtering for nonlinear systems driven by nonwhite noises: an approximation scheme
- Filtering of continuous-time Markov chains with noise-free observation and applications
- Functional analysis, Sobolev spaces and partial differential equations
- Least-squares state estimation of systems with state-dependent observation noise
- Markov Chains
- Multivariate point processes: predictable projection, Radon-Nikodym derivatives, representation of martingales
- Necessary and sufficient optimality conditions for control of piecewise deterministic Markov processes
- Nonlinear filtering with signal dependent observation noise
- On Reducing a Jump Controllable Markov Model to a Model with Discrete Time
- On the optimal control of partially observed inventory systems
- Optimal Control with State-Space Constraint I
- Optimal Control with State-Space Constraint. II
- Optimal control of piecewise deterministic Markov processes: a BSDE representation of the value function
- Optimal control of piecewise deterministic Markov processes
- Point processes and queues. Martingale dynamics
- Some topological properties of vector measures with bounded variation and its applications
- Stochastic optimal control. The discrete time case
- Viscosity solutions of Hamilton-Jacobi equations
Cited in (5)
- State constrained control problems in Banach lattices and applications
- Stochastic filtering of a pure jump process with predictable jumps and path-dependent local characteristics
- Stochastic filtering and optimal control of pure jump Markov processes with noise-free partial observation
- Filtering method for linear and non-linear stochastic optimal control of partially observable systems. II
- Nonlinear filtering of partially observed systems arising in singular stochastic optimal control