The expected total cost criterion for Markov decision processes under constraints: a convex analytic approach
DOI: 10.1239/AAP/1346955264 · zbMATH Open: 1286.90161 · OpenAlex: W2028198084 · MaRDI QID: Q3167338
Authors: Masayuki Horiguchi, F. Dufour, A. B. Piunovskiy
Publication date: 2 November 2012
Published in: Advances in Applied Probability
Full work available at URL: https://projecteuclid.org/euclid.aap/1346955264
Recommendations
- The expected total cost criterion for Markov decision processes under constraints
- Scientific article (zbMATH DE number 3891132)
- Constrained Markov decision processes with expected total reward criteria
- A convex programming approach for discrete-time Markov decision processes under the expected total reward criterion
- A convex analytic approach to Markov decision processes
Classification (MSC): Applications of mathematical programming (90C90); Markov chains: discrete-time Markov processes on discrete state spaces (60J10); Markov and semi-Markov decision processes (90C40)
Cites Work
- Title not available
- Title not available
- Title not available
- Stochastic optimal control. The discrete time case
- Average Optimality in Markov Control Processes via Discounted-Cost Problems and Linear Programming
- Title not available
- Markov decision processes with applications to finance.
- On dynamic programming: Compactness of the space of policies
- Title not available
- Title not available
- Title not available
- Markov decision processes with a stopping time constraint
- Stopped Markov decision processes with multiple constraints
- Multiobjective stopping problem for discrete-time Markov processes: convex analytic approach
Cited In (20)
- Linear programming approach to optimal impulse control problems with functional constraints
- Aggregated occupation measures and linear programming approach to constrained impulse control problems
- A convex programming approach for discrete-time Markov decision processes under the expected total reward criterion
- Multiobjective stopping problem for discrete-time Markov processes: convex analytic approach
- Extreme Occupation Measures in Markov Decision Processes with an Absorbing State
- Title not available
- Impulsive control for continuous-time Markov decision processes: a linear programming approach
- A linear programming formulation for constrained discounted continuous control for piecewise deterministic Markov processes
- On convergence of value iteration for a class of total cost Markov decision processes
- Note on discounted continuous-time Markov decision processes with a lower bounding function
- Maximizing the probability of visiting a set infinitely often for a Markov decision process with Borel state and action spaces
- Sufficiency of deterministic policies for atomless discounted and uniformly absorbing MDPs with multiple criteria
- Constrained Markov decision processes with total cost criteria: Lagrangian approach and dual linear program
- Constrained Markov control processes with randomized discounted cost criteria: occupation measures and extremal points
- Duality in optimal impulse control
- Convex analytic approach to constrained discounted Markov decision processes with non-constant discount factors
- The expected total cost criterion for Markov decision processes under constraints
- Constrained Markov decision processes with expected total reward criteria
- Convex analytic method revisited: further optimality results and performance of deterministic policies in average cost stochastic control
- A convex analytic approach to Markov decision processes