Primal-Dual Algorithm for Distributed Reinforcement Learning: Distributed GTD

Publication: 6299396

arXiv: 1803.08031 · MaRDI QID: Q6299396

Dong Hwan Lee, Hyungjin Yoon, Naira Hovakimyan

Publication date: 21 March 2018

Abstract: The goal of this paper is to study a distributed version of the gradient temporal-difference (GTD) learning algorithm for multi-agent Markov decision processes (MDPs). Temporal-difference (TD) learning is a reinforcement learning (RL) algorithm that learns an infinite-horizon discounted cost function (or value function) for a given fixed policy without knowledge of the model. In the distributed RL setting, each agent receives a local reward through local processing. Information exchange over a sparse communication network allows the agents to learn the global value function corresponding to the global reward, which is the sum of the local rewards. In this paper, the problem is converted into a constrained convex optimization problem with a consensus constraint. We then propose a primal-dual distributed GTD algorithm and prove that it converges almost surely to a set of stationary points of the optimization problem.
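
To make the setting concrete, below is a minimal, illustrative Python sketch of a GTD2-style primal-dual iteration with a consensus constraint handled through Lagrange multipliers exchanged over a fixed communication graph. The MDP, the feature matrix, the ring-graph topology, the step sizes, and the exact form of the updates are assumptions chosen for illustration; they are not taken from the paper and do not reproduce its precise algorithm or convergence guarantees.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact update rule): each agent runs a
# GTD2-style primal-dual iteration on its own local reward, while a consensus
# constraint between neighbouring agents is enforced through dual variables.

rng = np.random.default_rng(0)

n_agents, n_states, n_feat, gamma = 3, 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)       # fixed-policy transition matrix (hypothetical MDP)
phi = rng.standard_normal((n_states, n_feat))              # shared state features
local_r = rng.uniform(0.0, 1.0, size=(n_agents, n_states)) # each agent observes only its local reward

# Ring communication graph: agent i exchanges parameters with its two neighbours.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

theta = np.zeros((n_agents, n_feat))   # primal: local value-function parameters
w = np.zeros((n_agents, n_feat))       # primal: GTD correction weights
lam = np.zeros((n_agents, n_feat))     # dual: multipliers for the consensus constraint
alpha, beta, eta = 0.05, 0.05, 0.01    # step sizes (illustrative choices)

s = 0
for t in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    f, f_next = phi[s], phi[s_next]
    for i in range(n_agents):
        # Local TD error using only agent i's reward.
        delta = local_r[i, s] + gamma * theta[i] @ f_next - theta[i] @ f
        # Disagreement with neighbours (the consensus-constraint residual).
        disagreement = sum(theta[i] - theta[j] for j in neighbors[i])
        # GTD2-style primal steps, corrected by the consensus duals.
        theta[i] += alpha * ((f - gamma * f_next) * (f @ w[i]) - lam[i] - eta * disagreement)
        w[i] += beta * (delta - f @ w[i]) * f
        # Dual ascent on the consensus constraint.
        lam[i] += eta * disagreement
    s = s_next

print("max disagreement across agents:", np.max(np.abs(theta - theta.mean(axis=0))))
```

In this sketch each agent only ever sees its own local reward; it is the exchange of parameters and the dual (consensus) updates over the graph that drive the local estimates toward a common value-function parameter for the sum of rewards.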