A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning

arXiv: 2205.11831 · MaRDI QID: Q6400002


Authors: Eloïse Berthier, Ziad Kobeissi, Francis Bach


Publication date: 24 May 2022

Abstract: Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
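The abstract describes regularized TD(0) run in a reproducing kernel Hilbert space, with convergence shown for the averaged iterates. As a rough illustration only, the sketch below shows what one pass of such an update could look like: the value function is a kernel expansion over visited states, each step adds a TD-error-weighted kernel function and shrinks existing coefficients (the ridge regularization), and the coefficients of all iterates are averaged. The Gaussian kernel, one-dimensional state space, and constant step size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kernel(x, y, bw=0.5):
    # Gaussian (RBF) kernel on a 1-D state space -- an assumed choice
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

def rkhs_td0(transitions, gamma=0.9, lam=0.1, step=0.3, bw=0.5):
    """One pass of a regularized non-parametric TD(0) sketch.

    transitions: list of (state, reward, next_state) samples from a
    Markov reward process. Returns the kernel centers and the
    coefficients of the *averaged* iterate.
    """
    centers = []             # states visited: expansion points K(s_i, .)
    coefs = np.zeros(0)      # coefficients of the current iterate V_t
    avg = np.zeros(0)        # running sum of coefficient vectors

    for s, r, s_next in transitions:
        def V(x):
            # Evaluate the current iterate at state x
            if len(centers) == 0:
                return 0.0
            k = np.array([kernel(c, x, bw) for c in centers])
            return float(k @ coefs)

        delta = r + gamma * V(s_next) - V(s)   # TD error
        coefs = (1.0 - step * lam) * coefs     # ridge shrinkage (regularization)
        centers.append(s)
        coefs = np.append(coefs, step * delta) # add new center K(s, .)
        avg = np.append(avg, 0.0) + coefs      # accumulate for iterate averaging

    n = len(transitions)
    return centers, avg / n
```

The averaged value function is then `x -> sum_i avg_coefs[i] * kernel(centers[i], x)`; averaging the coefficient vectors is valid because every iterate lies in the span of the kernel functions at visited states.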












