Approximate Q Learning for Controlled Diffusion Processes and Its Near Optimality

Publication:6136230

DOI: 10.1137/22M1484201
zbMATH Open: 1521.93214
arXiv: 2203.07499
OpenAlex: W4385162439
MaRDI QID: Q6136230
FDO: Q6136230


Authors: Erhan Bayraktar, Ali Devran Kara


Publication date: 29 August 2023

Published in: SIAM Journal on Mathematics of Data Science

Abstract: We study a Q-learning algorithm for continuous-time stochastic control problems. The proposed algorithm uses the sampled state process obtained by discretizing the state and control action spaces under piecewise constant control processes. We show that the algorithm converges to the solution of the optimality equation of a finite Markov decision process (MDP). Using this MDP model, we provide an upper bound on the approximation error for the optimal value function of the continuous-time control problem. Furthermore, we present provable upper bounds on the performance loss of the learned control process relative to the optimal admissible control process of the original problem. These error bounds are functions of the time and space discretization parameters, and they reveal the effect of each level of approximation: (i) approximation of the continuous-time control problem by an MDP, (ii) use of piecewise constant control processes, and (iii) space discretization. Finally, we state a time complexity bound for the proposed algorithm as a function of the time and space discretization parameters.
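
The following is a minimal sketch (Python/NumPy) of the kind of scheme the abstract describes: simulate the diffusion under a control held piecewise constant over intervals of length h, bin the continuous state into a finite grid, and run tabular Q-learning on the resulting sampled finite MDP with discount factor e^(-beta*h). The one-dimensional dynamics, quadratic cost, grid, exploration rule, and learning-rate schedule below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: dynamics dX_t = (-X_t + u) dt + sigma dW_t and the
# running cost c(x, u) = x^2 + 0.1 u^2 are assumed for demonstration.
import numpy as np

rng = np.random.default_rng(0)

h = 0.1                     # length of each piecewise-constant control interval (assumed)
beta = 1.0                  # continuous-time discount rate (assumed)
gamma = np.exp(-beta * h)   # induced discount factor of the sampled MDP

x_grid = np.linspace(-2.0, 2.0, 41)      # space discretization (assumed grid)
actions = np.array([-1.0, 0.0, 1.0])     # finite control action set (assumed)

def nearest_bin(x):
    """Map a continuous state to the index of the nearest grid point."""
    return int(np.argmin(np.abs(x_grid - x)))

def step(x, u, n_euler=10):
    """Simulate one control interval with Euler-Maruyama, holding u constant,
    and accumulate the discounted-free running cost over the interval."""
    dt = h / n_euler
    cost = 0.0
    for _ in range(n_euler):
        cost += (x**2 + 0.1 * u**2) * dt
        x = x + (-x + u) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()
    return x, cost

# Tabular Q-learning on the sampled finite MDP (cost minimization).
Q = np.zeros((len(x_grid), len(actions)))
x = 0.0
for k in range(100_000):
    i = nearest_bin(x)
    # epsilon-greedy exploration (assumed)
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmin(Q[i]))
    x_next, cost = step(x, actions[a])
    x_next = float(np.clip(x_next, x_grid[0], x_grid[-1]))
    j = nearest_bin(x_next)
    alpha = 1.0 / (1.0 + 0.001 * k)       # decaying learning rate (assumed)
    Q[i, a] += alpha * (cost + gamma * Q[j].min() - Q[i, a])
    x = x_next

# Greedy piecewise-constant control extracted from the learned Q-table.
greedy_policy = actions[np.argmin(Q, axis=1)]
```

Refining the time step h and the grid spacing shrinks the discretization error terms (i)-(iii) discussed in the abstract, at the price of a larger state space and longer learning time, which is the trade-off quantified by the paper's error and complexity bounds.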


Full work available at URL: https://arxiv.org/abs/2203.07499















