On linear programming for constrained and unconstrained average-cost Markov decision processes with countable action spaces and strictly unbounded costs
DOI: 10.1287/MOOR.2021.1177
zbMATH Open: 1489.90211
arXiv: 1905.12095
OpenAlex: W2947183726
MaRDI QID: Q5085149
Authors: Huizhen Yu
Publication date: 27 June 2022
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1905.12095
Recommendations
- The LP approach in average reward MDPs with multiple cost constraints: The countable state case
- Linear programming formulation of MDPs in countable state space: The multichain case
- Linear Programming and Average Optimality of Markov Control Processes on Borel Spaces—Unbounded Costs
- Discounted cost Markov decision processes on Borel spaces: The linear programming formulation
Keywords: constraints; duality; Markov decision processes; average cost; Borel state space; infinite-dimensional linear programs; countable action space; majorization condition; minimum pair
MSC classification: Linear programming (90C05); Optimality conditions and duality in mathematical programming (90C46); Markov and semi-Markov decision processes (90C40); Programming in abstract spaces (90C48); Optimal stochastic control (93E20)
Cites Work
- Markov Chains and Stochastic Stability
- Real Analysis and Probability
- Stochastic optimal control. The discrete time case
- Constrained Markov control processes in Borel spaces: the discounted case
- Constrained Average Cost Markov Control Processes in Borel Spaces
- Constrained markov decision processes with compact state and action spaces: the average case
- Linear programming formulation of MDPs in countable state space: The multichain case
- Constrained Discounted Dynamic Programming
- Handbook of Markov decision processes. Methods and applications
- A convex analytic approach to Markov decision processes
- Markov chains and invariant probabilities
- Stable sequential control rules and Markov chains
- Linear Programming and Markov Decision Chains
- On Linear Programming in a Markov Decision Problem
- Multichain Markov Renewal Programs
- Non-Existence of Everywhere Proper Conditional Distributions
- Sample-path average optimality for Markov control processes
- Discretization and Weak Convergence in Markov Decision Drift Processes
- Average cost optimality inequality for Markov decision processes with Borel spaces and universally measurable policies
- Ergodic Control of Markov Chains with Constraints—the General Case
- Linear Programming and Average Optimality of Markov Control Processes on Borel Spaces—Unbounded Costs
- Infinite Linear Programming and Multichain Markov Control Processes in Uncountable Spaces
- Sample path average optimality of Markov control processes with strictly unbounded cost
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The Non-Ergodic Case
- Average optimal stationary policies and linear programming in countable space Markov decision processes
- Duality theorem in Markovian decision problems
- The Existence of a Minimum Pair of State and Policy for Markov Decision Processes under the Hypothesis of Doeblin
- On the Minimum Pair Approach for Average Cost Markov Decision Processes with Countable Discrete Action Spaces and Strictly Unbounded Costs
- The LP approach in average reward MDPs with multiple cost constraints: The countable state case
Cited In (3)