Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes
DOI: 10.1016/j.ejco.2022.100041
zbMATH: 1530.90103
arXiv: 2107.07190
OpenAlex: W4295413102
MaRDI QID: Q6170035
Abdurakhmon Sadiev, Aleksandr Beznosikov, Savelii Chezhegov, Rachael Tappenden, Ekaterina Borodich, A. V. Gasnikov, Martin Takáč, Darina Dvinskikh
Publication date: 12 July 2023
Published in: EURO Journal on Computational Optimization
Full work available at URL: https://arxiv.org/abs/2107.07190
Keywords: lower and upper bounds; decentralized optimization; distributed optimization; federated learning; accelerated algorithms
MSC classifications: Convex programming (90C25); Nonlinear programming (90C30); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Eigenvalues of the Laplacian of a graph
- Distributed Subgradient Methods for Multi-Agent Optimization
- Advances and Open Problems in Federated Learning
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- Understanding Machine Learning