Localization and approximations for distributed non-convex optimization
Publication: 6191109
DOI: 10.1007/s10957-023-02328-8
arXiv: 1706.02599
MaRDI QID: Q6191109
Publication date: 9 February 2024
Published in: Journal of Optimization Theory and Applications
Abstract: Distributed optimization has many applications in communication networks, sensor networks, signal processing, machine learning, and artificial intelligence. Methods for distributed convex optimization are widely investigated, while those for non-convex objectives are not well understood. One of the first non-convex distributed optimization frameworks over an arbitrary interaction graph was proposed by Di Lorenzo and Scutari [IEEE Transactions on Signal and Information Processing over Networks, 2 (2016), pp. 120-136], which iteratively applies a combination of local optimization with convex approximations and local averaging. We generalize the existing results in two ways. First, when the decision variables are separable and each local objective depends only on a subset of them, we reduce the communication complexity of the algorithm so that nodes keep and communicate only their local variables instead of the whole vector of variables. Second, we relax the assumption that the objectives' gradients are bounded and Lipschitz by means of successive proximal approximations. Having developed the methodology, we discuss several ways to apply our algorithmic framework to resource allocation problems in multi-cellular networks, where both generalizations prove useful and practical. Simulation results show that our resource allocation algorithms outperform naive single-cell methods and that our approximation framework leads to numerically more stable algorithms.
Full work available at URL: https://arxiv.org/abs/1706.02599
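The abstract describes an iteration that alternates a local convex-surrogate minimization at each node with averaging over neighbours. Below is a minimal sketch of the simplest instance of such a scheme, where the surrogate is the linearization of each local objective plus a proximal term (so its minimizer reduces to a gradient step with step size 1/tau). The local objectives f_i, the ring graph, the averaging weights W, and the parameter tau are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                       # nodes, decision-variable dimension

# Hypothetical smooth non-convex local objectives (illustrative only):
#   f_i(x) = 0.5 ||x - c_i||^2 + cos(w_i' x)
c = [rng.normal(size=d) for _ in range(n)]
w = [rng.normal(size=d) for _ in range(n)]

def grad_f(i, x):
    return (x - c[i]) - np.sin(w[i] @ x) * w[i]

# Doubly stochastic averaging weights on a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

tau = 5.0                         # proximal weight; makes each surrogate strongly convex
X = rng.normal(size=(n, d))       # each row is one node's copy of the variable

for _ in range(300):
    # Local step: each node minimizes the convex surrogate
    #   grad f_i(x_i)' (x - x_i) + (tau/2) ||x - x_i||^2,
    # whose minimizer is a gradient step of length 1/tau.
    X_half = np.array([X[i] - grad_f(i, X[i]) / tau for i in range(n)])
    # Communication step: average with neighbours.
    X = W @ X_half

print("max disagreement across nodes:", np.abs(X - X.mean(0)).max())
```

In the paper's generalized setting, each node would keep and exchange only the blocks of the variable its objective actually depends on, rather than the full rows of X above.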
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Approximation and regularization of arbitrary functions in Hilbert spaces by the Lasry-Lions method
- A remark on regularization in Hilbert spaces
- Techniques of variational analysis
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Variational Analysis
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Fastest Mixing Markov Chain on a Graph
- Distributed Subgradient Methods for Multi-Agent Optimization
- On the Convergence Time of Asynchronous Distributed Quantized Averaging Algorithms
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- On the Convergence of Block Coordinate Descent Type Methods
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent