Error bounds for rolling horizon policies in discrete-time Markov control processes

Publication: 5202616

DOI: 10.1109/9.58554
zbMath: 0724.93087
OpenAlex: W2152545378
MaRDI QID: Q5202616

Onésimo Hernández-Lerma, Jean-Bernard Lasserre

Publication date: 1990

Published in: IEEE Transactions on Automatic Control

Full work available at URL: https://doi.org/10.1109/9.58554




Related Items (32)

LP based upper and lower bounds for Cesàro and Abel limits of the optimal values in problems of control of stochastic discrete time systems
Centralized systemic risk control in the interbank system: weak formulation and gamma-convergence
Modelling adherence behaviour for the treatment of obstructive sleep apnoea
Decentralized stochastic control
Minimizing capital injections by investment and reinsurance for a piecewise deterministic reserve process model
Hamilton-Jacobi-Bellman inequality for the average control of piecewise deterministic Markov processes
Characterization and computation of infinite-horizon specifications over Markov processes
Optimal dividend problems with a risk probability criterion
Certainty equivalent control of discrete time Markov processes with the average reward functional
Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
Approximate receding horizon approach for Markov decision processes: average reward case
Ramsey’s Discrete-Time Growth Model: A Markov Decision Approach with Stochastic Labor
Numerical analysis of generalised max-plus eigenvalue problems.
Anticipation of goals in automated planning
Quantitative model-checking of controlled discrete-time Markov processes
Controlled Markov decision processes with AVaR criteria for unbounded costs
Probabilistic Model Checking of Labelled Markov Processes via Finite Approximate Bisimulations
On repetitive control and the behaviour of a middle-aged consumer
Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs
Log-Optimal Portfolios with Memory Effect
Convex stochastic fluid programs with average cost.
Congestion-dependent pricing in a stochastic service system
Note on discounted continuous-time Markov decision processes with a lower bounding function
Robust optimal control using conditional risk mappings in infinite horizon
Stochastic output-feedback model predictive control
Convergence of Markov decision processes with constraints and state-action dependent discount factors
Inventory models with Markovian demands and cost functions of polynomial growth
Duality in optimal impulse control
Unnamed Item
Optimal maintenance strategies for systems with partial repair options and without assuming bounded costs
Spectral theorem for convex monotone homogeneous maps, and ergodic control
First passage Markov decision processes with constraints and varying discount factors



