scientific article; zbMATH DE number 1014727
From MaRDI portal
Publication:4338419
Cites work
- scientific article; zbMATH DE number 3823489 (title not available)
- scientific article; zbMATH DE number 3748909 (title not available)
- scientific article; zbMATH DE number 3505708 (title not available)
- scientific article; zbMATH DE number 3628142 (title not available)
- scientific article; zbMATH DE number 826383 (title not available)
- Average optimal policies in Markov decision drift processes with applications to a queueing and a replacement model
- Discrete Approximation of Continuous Time Stochastic Control Systems
- Discretization and Weak Convergence in Markov Decision Drift Processes
- Extensions of Trotter's operator semigroup approximation theorems
- Impulsive and continuously acting control of jump processes-time discretization
- Markov Decision Drift Processes; Conditions for Optimality Obtained by Discretization
- Necessary and Sufficient Dynamic Programming Conditions for Continuous Time Stochastic Optimal Control
- Numerical solution of partial differential equations. Transl. from the German by Peter R. Wadsack
- On the Convergence of the Discrete Time Dynamic Programming Equation for General Semigroups
- On the Optimality of $(s,S)$-Policies in Continuous Review Inventory Models
- On the finite horizon Bellman equation for controlled Markov jump models with unbounded characteristics: Existence and approximation
- Optimal control of the service rate in an M/G/1 queueing system
- Probability methods for approximations in stochastic control and for elliptic equations
- Survey of the stability of linear finite difference equations
Cited in (3)