Negative Dynamic Programming
Publication:5521261
DOI: 10.1214/aoms/1177699369
zbMath: 0144.43201
OpenAlex: W2062796590
MaRDI QID: Q5521261
Publication date: 1966
Published in: The Annals of Mathematical Statistics
Full work available at URL: https://doi.org/10.1214/aoms/1177699369
Related Items (96)
Sequential variable sampling plan for normal distribution ⋮ Existence of optimal stationary policies in discounted Markov decision processes: Approaches by occupation measures ⋮ On \(\epsilon\)-optimal continuous selectors and their application in discounted dynamic programming ⋮ Kolmogorov's Equations for Jump Markov Processes and Their Applications to Control Problems ⋮ The existence of good Markov strategies for decision processes with general payoffs ⋮ Non-randomized strategies in stochastic decision processes ⋮ Characterization and simplification of optimal strategies in positive stochastic games ⋮ A Mixed Value and Policy Iteration Method for Stochastic Control with Universally Measurable Policies ⋮
Single machine flow-time scheduling with a single breakdown ⋮ Finite state dynamic programming with the total reward criterion ⋮ Minimizing expected makespan in a two-machine stochastic open shop with Poisson arrival ⋮ Optimality in Feller semi-Markov control processes ⋮ Some basic concepts of numerical treatment of Markov decision models ⋮ Zero-sum stochastic games with unbounded costs: Discounted and average cost cases ⋮ On continuous dynamic programming with discrete time-parameter ⋮ Blackwell optimal policies in a Markov decision process with a Borel state space ⋮
Global asymptotic stability results for multisector models of optimal growth under uncertainty when future utilities are discounted ⋮ Sufficiency of Markov Policies for Continuous-Time Jump Markov Decision Processes ⋮ Necessity of the terminal condition in the infinite horizon dynamic optimization problems with unbounded payoff ⋮ On a Continuously Discounted Vector Valued Markov Decision Process ⋮ Stochastic scheduling problems I — General strategies ⋮ The optimal frequency of information purchases ⋮ Measurable Gambling Houses ⋮ Regular Policies in Abstract Dynamic Programming ⋮
Stochastic games with metric state space ⋮ On optimality criteria for dynamic programs with long finite horizons ⋮ On theory and algorithms for Markov decision problems with the total reward criterion ⋮ Invariant problems in dynamic programming - average reward criterion ⋮ Finite-stage stochastic decision processes with recursive reward structure I: optimality equations and deterministic strategies ⋮ Positive zero-sum stochastic games with countable state and action spaces ⋮ On the terminal condition for the Bellman equation for dynamic optimization with an infinite horizon ⋮ Discounted dynamic programming with unbounded returns: application to economic models ⋮
Limit-optimal strategies in countable state decision problems ⋮ Bellman inequalities in Markov decision deterministic drift processes ⋮ Two characterizations of optimality in dynamic programming ⋮ Average Cost Optimality Inequality for Markov Decision Processes with Borel Spaces and Universally Measurable Policies ⋮ Control: a perspective ⋮ Average cost Markov decision processes under the hypothesis of Doeblin ⋮ On variable discounting in dynamic programming: applications to resource extraction and other economic models ⋮ Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal ⋮ Controlling a biological invasion: a non-classical dynamic economic model ⋮
Equilibria in a two-species fishery ⋮ Quantitative model-checking of controlled discrete-time Markov processes ⋮ A limited order capacity stochastic inventory model with a fixed cost for order: The discounted case ⋮ Pseudopolynomial iterative algorithm to solve total-payoff games and min-cost reachability games ⋮ A note on negative dynamic programming for risk-sensitive control ⋮ Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness, and convergence ⋮ A linear-quadratic Gaussian approach to dynamic information acquisition ⋮ Semi-Markov decision processes with a reachable state-subset ⋮
Dynamic efficiency of conservation of renewable resources under uncertainty. ⋮ Markov decision processes associated with two threshold probability criteria ⋮ Stochastic games with unbounded payoffs: applications to robust control in economics ⋮ Compactness of the space of non-randomized policies in countable-state sequential decision processes ⋮ On the stability of a dynamic stochastic production and inventory system controlled by an optimal policy ⋮ Continuous versus measurable recourse in N-stage stochastic programming ⋮ Controlled jump processes ⋮ On Discrete-Time Dynamic Programming in Insurance: Exponential Utility and Minimizing the Ruin Probability ⋮
On dynamic programming: Compactness of the space of policies ⋮ OPTIMALITY OF FOUR-THRESHOLD POLICIES IN INVENTORY SYSTEMS WITH CUSTOMER RETURNS AND BORROWING/STORAGE OPTIONS ⋮ On stopped decision processes with discrete time parameter ⋮ Estimates for finite-stage dynamic programs ⋮ An analysis of transient Markov decision processes ⋮ Solving stochastic dynamic programming problems by linear programming — An annotated bibliography ⋮ Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon ⋮ Generalised discounting in dynamic programming with unbounded returns ⋮
Multiple feedback at a single-server station ⋮ Stochastic evolution and control of an economic activity ⋮ Finitely Additive Dynamic Programming ⋮ On some aspects in stochastic dynamic programming with terminal region ⋮ Finite-stage reward functions having the Markov adequacy property ⋮ On stochastic games ⋮ On the convergence of successive approximations in dynamic programming with non-zero terminal reward ⋮ MDPs with setwise continuous transition probabilities ⋮ On stochastic games. II ⋮
Über ein stochastisches dynamisches Entscheidungsmodell mit allgemeinen Ertragsfunktionalen ⋮ Analysis for some properties of discrete time Markov decision processes ⋮ Constrained Markov Decision Processes with Expected Total Reward Criteria ⋮ On structural properties of optimal average cost functions in Markov decision processes with Borel spaces and universally measurable policies ⋮ Stopped decision processes on complete separable metric spaces ⋮ Perfect equilibrium in non-randomized strategies in a class of symmetric dynamic games ⋮ Optimal strategies for an inventory system with cost functions of general form ⋮ Instationäre dynamische Optimierung bei schwachen Voraussetzungen über die Gewinnfunktionen ⋮
Optimal Markov strategies ⋮ Optimal control of stationary Markov processes ⋮ Multiple objective nonatomic Markov decision processes with total reward criteria ⋮ Dynamic programming for non-additive stochastic objectives ⋮ Maximum-Stopping-Value Policies in Finite Markov Population Decision Chains ⋮ Modeling secrecy and deception in a multiple-period attacker-defender signaling game ⋮ Robust shortest path planning and semicontractive dynamic programming ⋮ \(K\) competing queues with customer abandonment: optimality of a generalised \(c \mu \)-rule by the smoothed rate truncation method ⋮
Stable Optimal Control and Semicontractive Dynamic Programming ⋮ MARKOV DECISION PROCESSES ⋮ Nonatomic total rewards Markov decision processes with multiple criteria ⋮ Finite state Markov decision models with average reward criteria ⋮ Stochastic scheduling problems II: Set strategies ⋮ Stationary policies and Markov policies in Borel dynamic programming