Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
DOI: 10.1137/20M1361158 · zbMath: 1484.90090 · arXiv: 2008.07428 · OpenAlex: W4225530789 · MaRDI QID: Q5026835
Authors: Ran Xin, Usman A. Khan, Soummya Kar
Publication date: 8 February 2022
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2008.07428
Keywords: stochastic optimization · nonconvex optimization · variance reduction · decentralized optimization · gradient tracking
MSC classification: Nonconvex programming, global optimization (90C26) · Stochastic programming (90C15) · Multi-agent systems (93A16)
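
The title and keywords of this record name two techniques: recursive variance reduction (a SARAH-type gradient estimator) and gradient tracking. The sketch below illustrates, under stated assumptions, how the two are commonly combined in a decentralized finite-sum setting: each agent recursively updates a variance-reduced estimate of its local gradient, and the network tracks the average of these estimates through a doubly stochastic mixing matrix. The quadratic losses, ring topology, step size, and epoch length are illustrative choices, not taken from the paper; this is not the authors' algorithm (which is available at the arXiv link above).

```python
# Minimal sketch (illustrative, not the paper's pseudocode) of a SARAH-type
# recursive variance-reduced estimator combined with gradient tracking.
import numpy as np

rng = np.random.default_rng(0)
n_agents, m_local, d = 4, 25, 5          # agents, samples per agent, dimension

# Synthetic local finite sums: f_{i,j}(x) = 0.5 * (a_{ij}^T x - b_{ij})^2.
# (A least-squares loss keeps the demo short; the estimator itself does
# not rely on convexity.)
A = rng.normal(size=(n_agents, m_local, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=(n_agents, m_local))

def grad(i, j, x):
    """Gradient of the j-th component function at agent i."""
    a = A[i, j]
    return (a @ x - b[i, j]) * a

def full_local_grad(i, x):
    """Full gradient of agent i's local finite sum."""
    return A[i].T @ (A[i] @ x - b[i]) / m_local

# Doubly stochastic mixing matrix for a ring (lazy Metropolis weights).
W = np.eye(n_agents) / 2
for i in range(n_agents):
    W[i, (i - 1) % n_agents] += 0.25
    W[i, (i + 1) % n_agents] += 0.25

alpha, T, q = 0.05, 200, m_local          # step size, iterations, epoch length
x = np.zeros((n_agents, d))               # local iterates x_i
v = np.stack([full_local_grad(i, x[i]) for i in range(n_agents)])
y = v.copy()                              # gradient trackers y_i

for t in range(1, T + 1):
    # Consensus plus tracked-gradient step: x_i <- sum_r W_ir x_r - alpha y_i.
    x_new = W @ x - alpha * y
    if t % q == 0:
        # Epoch refresh: reset to the exact full local gradient.
        v_new = np.stack([full_local_grad(i, x_new[i])
                          for i in range(n_agents)])
    else:
        # SARAH recursion: v_i <- grad_j(x_i^{t}) - grad_j(x_i^{t-1}) + v_i,
        # with one uniformly sampled component j per agent per iteration.
        v_new = np.empty_like(v)
        for i in range(n_agents):
            j = rng.integers(m_local)
            v_new[i] = grad(i, j, x_new[i]) - grad(i, j, x[i]) + v[i]
    # Gradient tracking update: y_i <- sum_r W_ir y_r + v_i^{new} - v_i.
    y = W @ y + v_new - v
    x, v = x_new, v_new

print("distance of network average to x*:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Resetting the estimator to the full local gradient once per epoch bounds the error accumulated by the recursion; between resets each agent pays only one stochastic gradient evaluation per iteration, which is the usual source of the computational advantage of such methods over batch gradient tracking.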
Related Items
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
- A variance-reduced stochastic gradient tracking algorithm for decentralized optimization with orthogonality constraints
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Cites Work
- Penalized likelihood regression for generalized linear models with non-quadratic penalties
- Distributed strategies for generating weight-balanced and doubly stochastic digraphs
- Distributed stochastic subgradient projection algorithms for convex optimization
- Lectures on convex optimization
- Distributed stochastic gradient tracking methods
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed nonconvex constrained optimization over time-varying digraphs
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- On the Convergence of Decentralized Gradient Descent
- On Convergence Rate of Distributed Stochastic Gradient Algorithm for Convex Optimization with Inequality Constraints
- On the Learning Behavior of Adaptive Networks—Part I: Transient Analysis
- Robust Stochastic Approximation Approach to Stochastic Programming
- Probability with Martingales
- Decentralized Frank–Wolfe Algorithm for Convex and Nonconvex Problems
- Asymptotic Properties of Primal-Dual Algorithm for Distributed Stochastic Optimization over Random Networks with Imperfect Communications
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- Swarming for Faster Convergence in Stochastic Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Variance-Reduced Stochastic Learning by Networked Agents Under Random Reshuffling
- Harnessing Smoothness to Accelerate Distributed Optimization
- Optimization Methods for Large-Scale Machine Learning
- Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data
- On the Influence of Bias-Correction on Distributed Stochastic Optimization
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
- An Improved Convergence Analysis for Decentralized Online Stochastic Non-Convex Optimization
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Inexact SARAH algorithm for stochastic optimization