Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks

From MaRDI portal
Publication: Q6362011

arXiv: 2103.02271 · MaRDI QID: Q6362011 · FDO: Q6362011


Authors: Xia Jiang, Xianlin Zeng, Jian Sun, Jie Chen


Publication date: 3 March 2021

Abstract: This note studies the distributed non-convex optimization problem with non-smooth regularization, which has wide applications in decentralized learning, estimation, and control. The objective function is the sum of different local objective functions, each consisting of a differentiable (possibly non-convex) cost function and a non-smooth convex function. This paper presents a distributed proximal gradient algorithm for the non-smooth non-convex optimization problem over time-varying multi-agent networks. Each agent updates its local variable estimate via a multi-step consensus operator followed by a proximal operator. We prove that the generated local variables achieve consensus and converge to the set of critical points with convergence rate O(1/T). Finally, we verify the efficacy of the proposed algorithm by numerical simulations.
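The update structure described in the abstract (multi-step consensus mixing followed by a local gradient step and a proximal step) can be illustrated with a minimal numerical sketch. This is not the paper's exact algorithm: the scalar lasso-type objective (smooth quadratic local costs plus an l1 regularizer, which is convex rather than non-convex), the two alternating pairwise-averaging mixing matrices standing in for a time-varying network, and all step-size and iteration parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (scalar l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(a, lam=0.5, alpha=0.05, consensus_steps=2, iters=500):
    """Sketch of a distributed proximal gradient method over a
    time-varying network.

    Each agent i holds local smooth cost f_i(x) = 0.5*(x - a[i])**2
    and shared non-smooth regularizer g(x) = lam*|x|.  Per iteration:
    multi-step consensus mixing, a local gradient step, then a prox step.
    """
    n = len(a)
    x = np.zeros(n)
    # Two doubly stochastic mixing matrices, alternated over time to
    # mimic a time-varying (jointly connected) 4-agent network:
    # W1 averages pairs (0,1) and (2,3); W2 averages pairs (1,2) and (3,0).
    W1 = np.array([[.5, .5, 0, 0], [.5, .5, 0, 0],
                   [0, 0, .5, .5], [0, 0, .5, .5]])
    W2 = np.array([[.5, 0, 0, .5], [0, .5, .5, 0],
                   [0, .5, .5, 0], [.5, 0, 0, .5]])
    t = 0  # global clock selecting the active mixing matrix
    for _ in range(iters):
        v = x.copy()
        for _ in range(consensus_steps):     # multi-step consensus operator
            v = (W1 if t % 2 == 0 else W2) @ v
            t += 1
        grad = v - a                          # local gradients of f_i at v
        x = soft_threshold(v - alpha * grad, alpha * lam)  # proximal step
    return x

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 3.0, 4.0])
    x = distributed_prox_grad(a)
    # Centralized reference: minimizer of (1/n)*sum f_i + lam*|x|
    # is soft_threshold(mean(a), lam) = 2.0 for these values.
    print(x, x.mean())
```

With these toy mixing matrices the agents' estimates cluster around the centralized solution `soft_threshold(a.mean(), lam) = 2.0`, with a residual disagreement on the order of the step size, consistent with the consensus-plus-convergence behavior the paper establishes for the general non-convex setting.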

This page was built for publication: Distributed proximal gradient algorithm for non-smooth non-convex optimization over time-varying networks
