Continuous time Markov decision processes with interventions
DOI: 10.1080/17442508308833256 · zbMath: 0498.90081 · OpenAlex: W2040986715 · MaRDI QID: Q3964342
Publication date: 1983
Published in: Stochastics
Full work available at URL: https://doi.org/10.1080/17442508308833256
Keywords: jumps; optimality conditions; stochastic dynamic programming; interventions; impulsive control; optimal policy; continuous-time Markov decision processes; finite action spaces; value determination; finite state spaces; discounted total reward; Borel measurable models; continuously and impulsively acting decisions; history depending policies; inequalities of quasi-variational type; undiscounted average reward
MSC: Stochastic programming (90C15); Dynamic programming (90C39); Markov and semi-Markov decision processes (90C40)
Cites Work
- Probability methods for approximations in stochastic control and for elliptic equations
- Stochastic optimal control. The discrete time case
- Controlled jump processes
- Continuous time control of Markov processes on an arbitrary state space: average return criterion
- Continuous time Markovian decision processes: average return criterion
- Finite state continuous time Markov decision processes with an infinite planning horizon
- Generalized semi-Markov decision processes
- On the Optimal Impulse Control Problem for Degenerate Diffusions
- A general Markov decision method I: Model and techniques
- Nondiscounted Continuous Time Markovian Decision Process with Countable State Space
- Discrete Dynamic Programming
- Finite State Continuous Time Markov Decision Processes with a Finite Planning Horizon
- Multichain Markov Renewal Programs
- Continuously Discounted Markov Decision Model with Countable State and Action Space
- Potentials for denumerable Markov chains