Localization and approximations for distributed non-convex optimization
Publication: 6191109
DOI: 10.1007/s10957-023-02328-8
arXiv: 1706.02599
MaRDI QID: Q6191109
Publication date: 9 February 2024
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1706.02599
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Approximation and regularization of arbitrary functions in Hilbert spaces by the Lasry-Lions method
- A remark on regularization in Hilbert spaces
- Techniques of variational analysis
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Variational Analysis
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Fastest Mixing Markov Chain on a Graph
- Distributed Subgradient Methods for Multi-Agent Optimization
- On the Convergence Time of Asynchronous Distributed Quantized Averaging Algorithms
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- On the Convergence of Block Coordinate Descent Type Methods
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent