Error bounds for rolling horizon policies in discrete-time Markov control processes
Publication: 5202616
DOI: 10.1109/9.58554 · zbMath: 0724.93087 · OpenAlex: W2152545378 · MaRDI QID: Q5202616
Onésimo Hernández-Lerma, Jean-Bernard Lasserre
Publication date: 1990
Published in: IEEE Transactions on Automatic Control
Full work available at URL: https://doi.org/10.1109/9.58554
Related Items (32)
LP based upper and lower bounds for Cesàro and Abel limits of the optimal values in problems of control of stochastic discrete time systems
Centralized systemic risk control in the interbank system: weak formulation and gamma-convergence
Modelling adherence behaviour for the treatment of obstructive sleep apnoea
Decentralized stochastic control
Minimizing capital injections by investment and reinsurance for a piecewise deterministic reserve process model
Hamilton-Jacobi-Bellman inequality for the average control of piecewise deterministic Markov processes
Characterization and computation of infinite-horizon specifications over Markov processes
Optimal dividend problems with a risk probability criterion
Certainty equivalent control of discrete time Markov processes with the average reward functional
Open Problem—Convergence and Asymptotic Optimality of the Relative Value Iteration in Ergodic Control
Approximate receding horizon approach for Markov decision processes: average reward case
Ramsey’s Discrete-Time Growth Model: A Markov Decision Approach with Stochastic Labor
Numerical analysis of generalised max-plus eigenvalue problems
Anticipation of goals in automated planning
Quantitative model-checking of controlled discrete-time Markov processes
Controlled Markov decision processes with AVaR criteria for unbounded costs
Probabilistic Model Checking of Labelled Markov Processes via Finite Approximate Bisimulations
On repetitive control and the behaviour of a middle-aged consumer
Illustrated review of convergence conditions of the value iteration algorithm and the rolling horizon procedure for average-cost MDPs
Log-Optimal Portfolios with Memory Effect
Convex stochastic fluid programs with average cost
Congestion-dependent pricing in a stochastic service system
Note on discounted continuous-time Markov decision processes with a lower bounding function
Robust optimal control using conditional risk mappings in infinite horizon
Stochastic output-feedback model predictive control
Convergence of Markov decision processes with constraints and state-action dependent discount factors
Inventory models with Markovian demands and cost functions of polynomial growth
Duality in optimal impulse control
Unnamed Item
Optimal maintenance strategies for systems with partial repair options and without assuming bounded costs
Spectral theorem for convex monotone homogeneous maps, and ergodic control
First passage Markov decision processes with constraints and varying discount factors