On Finding Optimal Policies in Discrete Dynamic Programming with No Discounting

From MaRDI portal
Publication:5528345

DOI: 10.1214/aoms/1177699272
zbMath: 0149.16301
OpenAlex: W2142032013
Wikidata: Q114846481
Scholia: Q114846481
MaRDI QID: Q5528345

Arthur F. Veinott jun.

Publication date: 1966

Published in: The Annals of Mathematical Statistics

Full work available at URL: https://doi.org/10.1214/aoms/1177699272




Related Items (46)

Index-based policies for discounted multi-armed bandits on parallel machines.
Optimality of trunk reservation for an M/M/K/N queue with several customer types and holding costs
On the existence of relative values for undiscounted Markovian decision processes with a scalar gain rate
Bias optimality and strong \(n\) \((n = -1,0)\) discount optimality for Markov decision processes
Optimality equations and sensitive optimality in bounded Markov decision processes
A Mixed Value and Policy Iteration Method for Stochastic Control with Universally Measurable Policies
Strong 1-optimal stationary policies in denumerable Markov decision processes
Blackwell optimal policies in a Markov decision process with a Borel state space
A new algorithm for a multi-item periodic review inventory system
Turnpike theorems for Markov games
Generalized Markovian decision processes
Survey of linear programming for standard and nonstandard Markovian control problems. Part I: Theory
Symblicit algorithms for mean-payoff and shortest path in monotonic Markov decision processes
Strong 0-discount optimal policies in a Markov decision process with a Borel state space
A value-iteration scheme for undiscounted multichain Markov renewal programs
Strong \(n\)-discount and finite-horizon optimality for continuous-time Markov decision processes
On optimality criteria for dynamic programs with long finite horizons
An axiomatic approach to Markov decision processes
A value iteration method for undiscounted multichain Markov decision processes
Unnamed Item
Another Set of Conditions for Strong \(n\) \((n = -1, 0)\) Discount Optimality in Markov Decision Processes
Policy improvement for perfect information additive reward and additive transition stochastic games with discounted and average payoffs
Sample-path optimality and variance-maximization for Markov decision processes
Review of a Markov decision algorithm for optimal inspections and revisions in a maintenance system with partial information
Planning for the long run: programming with patient, Pareto responsive preferences
Solution procedures for multi-objective Markov decision processes
A unified approach to Markov decision problems and performance sensitivity analysis with discounted and average criteria: multichain cases
On the existence of relative values for undiscounted multichain Markov decision processes
The vanishing discount approach to constrained continuous-time controlled Markov chains
An optimality principle for Markovian decision processes
Singularly perturbed linear programs and Markov decision processes
A survey of recent results on continuous-time Markov decision processes (with comments and rejoinder)
Unnamed Item
Finite state continuous time Markov decision processes with an infinite planning horizon
Linear programming considerations on Markovian decision processes with no discounting
On direct sums of Markovian decision process
On the set of optimal policies in discrete dynamic programming
On a set of optimal policies in continuous time Markovian decision problem
A new optimality criterion for discrete dynamic programming
Bias optimality for multichain continuous-time Markov decision processes
Finite state multi-armed bandit problems: Sensitive-discount, average-reward and average-overtaking optimality
Maximum-Stopping-Value Policies in Finite Markov Population Decision Chains
Unnamed Item
Markov decision processes
The variational calculus and approximation in policy space for Markovian decision processes
Decentralized evolutionary mechanisms for intertemporal economies: A possibility result




This page was built for publication: On Finding Optimal Policies in Discrete Dynamic Programming with No Discounting