Numerical Methods for Distributed Stochastic Compositional Optimization Problems with Aggregative Structure

From MaRDI portal
Publication: Q6416596

arXiv: 2211.04532
MaRDI QID: Q6416596

Shengchao Zhao, Yongchao Liu

Publication date: 3 November 2022

Abstract: The paper studies distributed stochastic compositional optimization problems over networks, where the inner-level function is the sum of the agents' private expectation functions. Exploiting this aggregative structure of the inner-level function, we employ a hybrid variance reduction method to estimate each agent's private expectation function, and apply a dynamic consensus mechanism to track the aggregate inner-level function. Combining these with the standard distributed stochastic gradient descent method, we propose a distributed aggregative stochastic compositional gradient descent method. When the objective function is smooth, the proposed method achieves the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. We further combine the proposed method with communication compression and propose a communication-compressed variant of the distributed aggregative stochastic compositional gradient descent method. The compressed variant maintains the optimal convergence rate $\mathcal{O}\left(K^{-1/2}\right)$. Simulated experiments on decentralized reinforcement learning verify the effectiveness of the proposed methods.
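The abstract names the algorithmic ingredients but not the recursions. A common formulation consistent with the text is $\min_{x} \frac{1}{n}\sum_{i=1}^{n} f_i\big(\frac{1}{n}\sum_{j=1}^{n} g_j(x)\big)$ with $g_j(x)=\mathbb{E}[G_j(x;\xi_j)]$. The Python sketch below shows, under that assumed formulation, how the three ingredients could fit together: a hybrid variance-reduced estimate of each agent's inner function, a dynamic-consensus tracker of the aggregate, and a mixed stochastic compositional gradient step. It is a minimal illustrative sketch, not the authors' method; all names (G, grad_f, W, alpha, beta) are placeholders, and the Jacobian of the aggregate is simplified to each agent's own Jacobian.

# Minimal sketch (NOT the paper's exact recursions) of a distributed
# aggregative stochastic compositional gradient step, assuming the
# formulation  min_x (1/n) sum_i f_i( (1/n) sum_j g_j(x) ),
# with g_j(x) = E[G_j(x; xi_j)].  Toy quadratic losses and linear inner
# maps are used so the example is self-contained and runnable.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 4, 5, 3                  # agents, decision dim, inner-map dim
W = np.full((n, n), 1.0 / n)       # doubly stochastic mixing matrix (complete graph)
alpha, beta = 0.05, 0.5            # step size, hybrid variance-reduction weight

A = rng.standard_normal((n, m, d)) # toy inner maps: g_j(x) = A_j x
b = rng.standard_normal((n, m))    # targets for toy outer losses

def G(j, x):
    # Noisy stochastic oracle for agent j's private inner map g_j.
    return A[j] @ x + 0.01 * rng.standard_normal(m)

def grad_f(i, y):
    # Gradient of the toy outer loss f_i(y) = 0.5 * ||y - b_i||^2.
    return y - b[i]

x = rng.standard_normal((n, d))                 # local decision variables
u = np.array([G(j, x[j]) for j in range(n)])    # hybrid VR inner estimates
y = u.copy()                                    # dynamic-consensus trackers of (1/n) sum_j g_j

for k in range(200):
    x_new = W @ x                               # consensus mixing on decisions
    u_new = np.empty_like(u)
    for j in range(n):
        # Hybrid variance reduction: fresh sample plus a correction
        # of the previous estimate (STORM-style recursion).
        g_new, g_old = G(j, x_new[j]), G(j, x[j])
        u_new[j] = g_new + (1 - beta) * (u[j] - g_old)
    # Dynamic consensus: track the network average of the inner estimates.
    y = W @ y + (u_new - u)
    for i in range(n):
        # Stochastic compositional gradient step; each agent uses its own
        # Jacobian A_i (a simplification of tracking the aggregate Jacobian).
        x_new[i] -= alpha * A[i].T @ grad_f(i, y[i])
    x, u = x_new, u_new

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))

The communication-compressed variant mentioned in the abstract would additionally compress the vectors exchanged in the two mixing steps (W @ x and W @ y); that mechanism is omitted here for brevity.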
