Stochastic control up to a hitting time: optimality and rolling-horizon implementation

From MaRDI portal
Publication:6209953

arXiv: 0806.3008 · MaRDI QID: Q6209953


Authors: Debasish Chatterjee, Eugenio Cinquemani, Giorgos Chaloulos, John Lygeros


Publication date: 18 June 2008

Abstract: We present a dynamic-programming-based solution to a stochastic optimal control problem up to a hitting time for a discrete-time Markov control process. First, we determine an optimal control policy that steers the process toward a compact target set while simultaneously minimizing an expected discounted cost. We then provide a rolling-horizon strategy for approximating the optimal policy, together with a quantitative characterization of its sub-optimality with respect to the optimal policy. Finally, we address related issues of asymptotic discount-optimality of the value-iteration policy. Both the state and action spaces are assumed to be Polish.
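The value-iteration policy mentioned in the abstract can be illustrated on a finite-state surrogate of the problem: minimize the expected discounted cost accumulated until the process first enters a target set K, with the value fixed at zero on K. This is a hypothetical sketch for intuition only — the paper itself works on general Polish state and action spaces, and the transition kernels, costs, and function names below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def value_iteration(P, c, target, alpha=0.9, tol=1e-8, max_iter=10_000):
    """Illustrative finite-state sketch (the paper treats Polish spaces).

    Minimizes E[ sum_{t < tau} alpha^t c(x_t, a_t) ], where tau is the
    hitting time of the target set K, by iterating the Bellman operator
    with the boundary condition V = 0 on K.

    P      : (A, S, S) array of transition kernels P[a, s, y] = P(y | s, a)
    c      : (S, A) array of stage costs
    target : boolean mask of length S marking the target set K
    alpha  : discount factor in (0, 1)
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman operator: Q[a, s] = c(s, a) + alpha * sum_y P(y | s, a) V(y)
        Q = c.T + alpha * (P @ V)        # shape (A, S)
        V_new = Q.min(axis=0)
        V_new[target] = 0.0              # cost accrual stops once K is hit
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Greedy (value-iteration) policy with respect to the converged V
    policy = (c.T + alpha * (P @ V)).argmin(axis=0)
    return V, policy
```

On a toy three-state chain where state 2 is the target, one action advances toward K at unit cost and another stays in place at a smaller per-step cost, the iteration correctly prefers advancing, since the discounted cost of staying forever exceeds the cost of reaching K.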

This page was built for publication: Stochastic control up to a hitting time: optimality and rolling-horizon implementation
