Stability of Stochastic Approximations With “Controlled Markov” Noise and Temporal Difference Learning
Publication: 5223776
DOI: 10.1109/TAC.2018.2874687
zbMATH Open: 1482.93680
arXiv: 1504.06043
OpenAlex: W2962741973
MaRDI QID: Q5223776
Arunselvan Ramaswamy, Shalabh Bhatnagar
Publication date: 18 July 2019
Published in: IEEE Transactions on Automatic Control
Abstract: We are interested in understanding stability (almost sure boundedness) of stochastic approximation algorithms (SAs) driven by a `controlled Markov' process. Analyzing this class of algorithms is important, since many reinforcement learning (RL) algorithms can be cast as SAs driven by a `controlled Markov' process. In this paper, we present easily verifiable sufficient conditions for stability and convergence of such SAs. Many RL applications involve continuous state spaces. While our analysis readily ensures stability for such continuous state applications, traditional analyses do not. Compared to the literature, our analysis presents a two-fold generalization: (a) the Markov process may evolve in a continuous state space, and (b) the process need not be ergodic under any given stationary policy. Temporal difference learning (TD) is an important policy evaluation method in reinforcement learning. The theory developed herein is used to analyze a generalized variant of TD. Our theory is also used to analyze a TD formulation of supervised learning for forecasting problems.
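For orientation, the class of algorithms studied here has the standard controlled-Markov-noise form from the stochastic approximation literature; the notation below is an illustrative sketch, not taken from this page:
\[
x_{n+1} = x_n + a(n)\bigl[h(x_n, Y_n) + M_{n+1}\bigr],
\]
where \(x_n \in \mathbb{R}^d\) is the iterate, the step sizes satisfy \(\sum_n a(n) = \infty\) and \(\sum_n a(n)^2 < \infty\), \(M_{n+1}\) is a martingale difference noise, and \(Y_n\) is the `controlled Markov' process, whose transition law may depend on the current iterate \(x_n\). Stability in the sense of the abstract means \(\sup_n \|x_n\| < \infty\) almost surely.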
Full work available at URL: https://arxiv.org/abs/1504.06043
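As a concrete instance of the TD setting mentioned in the abstract, the familiar one-step TD update with linear function approximation reads (a sketch in standard notation; the generalized variant analyzed in the paper differs in its details):
\[
\theta_{n+1} = \theta_n + a(n)\,\delta_n\,\phi(s_n), \qquad
\delta_n = r_{n+1} + \gamma\,\theta_n^{\top}\phi(s_{n+1}) - \theta_n^{\top}\phi(s_n),
\]
where \(\phi(s)\) is the feature vector of state \(s\), \(\gamma \in [0,1)\) is the discount factor, and \(r_{n+1}\) is the observed reward; the state sequence \((s_n)\) plays the role of the Markov noise above.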
Cited In (5)
- Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning
- Stochastic Approximation With Iterate-Dependent Markov Noise Under Verifiable Conditions in Compact State Space With the Stability of Iterates Not Ensured
- Revisiting the ODE method for recursive algorithms: fast convergence using quasi stochastic approximation
- Eligibility traces and forgetting factor in recursive least-squares-based temporal difference
- Two Time-Scale Stochastic Approximation with Controlled Markov Noise and Off-Policy Temporal-Difference Learning